00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1914 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3175 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.085 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.087 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.193 Using shallow fetch with depth 1 00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.193 > git --version # timeout=10 00:00:00.240 > git --version # 'git version 2.39.2' 00:00:00.240 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.459 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.471 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.483 Checking out Revision ea7646cba2e992b05bb6a53407de7fbcf465b5c6 (FETCH_HEAD) 00:00:09.483 > git config core.sparsecheckout # timeout=10 00:00:09.495 > git read-tree -mu HEAD # timeout=10 00:00:09.513 > git checkout -f ea7646cba2e992b05bb6a53407de7fbcf465b5c6 # timeout=5 00:00:09.537 Commit message: "ansible/inventory: Fix GP16's BMC address" 00:00:09.538 > git rev-list --no-walk ea7646cba2e992b05bb6a53407de7fbcf465b5c6 # timeout=10 00:00:09.639 [Pipeline] Start of Pipeline 00:00:09.652 [Pipeline] library 00:00:09.654 Loading library shm_lib@master 00:00:09.654 Library shm_lib@master is cached. Copying from home. 00:00:09.670 [Pipeline] node 00:00:09.679 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:09.681 [Pipeline] { 00:00:09.691 [Pipeline] catchError 00:00:09.694 [Pipeline] { 00:00:09.705 [Pipeline] wrap 00:00:09.712 [Pipeline] { 00:00:09.717 [Pipeline] stage 00:00:09.718 [Pipeline] { (Prologue) 00:00:09.881 [Pipeline] sh 00:00:10.163 + logger -p user.info -t JENKINS-CI 00:00:10.184 [Pipeline] echo 00:00:10.185 Node: WFP16 00:00:10.192 [Pipeline] sh 00:00:10.489 [Pipeline] setCustomBuildProperty 00:00:10.500 [Pipeline] echo 00:00:10.501 Cleanup processes 00:00:10.506 [Pipeline] sh 00:00:10.791 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.791 2974847 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.804 [Pipeline] sh 00:00:11.093 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.093 ++ grep -v 'sudo pgrep' 00:00:11.093 ++ awk '{print $1}' 00:00:11.093 + sudo kill -9 00:00:11.093 + true 00:00:11.109 [Pipeline] cleanWs 00:00:11.118 [WS-CLEANUP] Deleting project workspace... 00:00:11.118 [WS-CLEANUP] Deferred wipeout is used... 
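The prologue above prunes any SPDK processes left over from a previous run before the workspace is wiped. A minimal standalone sketch of that cleanup pattern, using only the commands already traced in this log (the workspace path is the one reported for this node and would differ elsewhere):

WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
# Report anything still running out of the previous build's checkout.
sudo pgrep -af "$WORKSPACE/spdk"
# Collect the matching PIDs, dropping the pgrep command itself from the list.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# An empty PID list makes kill fail; "|| true" keeps the stage green, as in the trace above.
sudo kill -9 $pids || true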
00:00:11.124 [WS-CLEANUP] done 00:00:11.129 [Pipeline] setCustomBuildProperty 00:00:11.144 [Pipeline] sh 00:00:11.423 + sudo git config --global --replace-all safe.directory '*' 00:00:11.473 [Pipeline] nodesByLabel 00:00:11.475 Found a total of 2 nodes with the 'sorcerer' label 00:00:11.484 [Pipeline] httpRequest 00:00:11.488 HttpMethod: GET 00:00:11.488 URL: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:11.491 Sending request to url: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:11.502 Response Code: HTTP/1.1 200 OK 00:00:11.503 Success: Status code 200 is in the accepted range: 200,404 00:00:11.503 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:13.998 [Pipeline] sh 00:00:14.280 + tar --no-same-owner -xf jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:14.298 [Pipeline] httpRequest 00:00:14.302 HttpMethod: GET 00:00:14.303 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:14.303 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:14.319 Response Code: HTTP/1.1 200 OK 00:00:14.319 Success: Status code 200 is in the accepted range: 200,404 00:00:14.320 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:24.626 [Pipeline] sh 00:01:24.914 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:29.124 [Pipeline] sh 00:01:29.408 + git -C spdk log --oneline -n5 00:01:29.408 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:29.408 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:01:29.408 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:01:29.408 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:01:29.408 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:01:29.421 [Pipeline] } 00:01:29.441 [Pipeline] // stage 00:01:29.448 [Pipeline] stage 00:01:29.450 [Pipeline] { (Prepare) 00:01:29.466 [Pipeline] writeFile 00:01:29.484 [Pipeline] sh 00:01:29.768 + logger -p user.info -t JENKINS-CI 00:01:29.781 [Pipeline] sh 00:01:30.065 + logger -p user.info -t JENKINS-CI 00:01:30.078 [Pipeline] sh 00:01:30.363 + cat autorun-spdk.conf 00:01:30.363 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.363 SPDK_TEST_NVMF=1 00:01:30.363 SPDK_TEST_NVME_CLI=1 00:01:30.363 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.363 SPDK_TEST_NVMF_NICS=e810 00:01:30.363 SPDK_RUN_UBSAN=1 00:01:30.363 NET_TYPE=phy 00:01:30.371 RUN_NIGHTLY=1 00:01:30.387 [Pipeline] readFile 00:01:30.447 [Pipeline] withEnv 00:01:30.449 [Pipeline] { 00:01:30.458 [Pipeline] sh 00:01:30.738 + set -ex 00:01:30.738 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:30.738 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:30.738 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.738 ++ SPDK_TEST_NVMF=1 00:01:30.738 ++ SPDK_TEST_NVME_CLI=1 00:01:30.738 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.738 ++ SPDK_TEST_NVMF_NICS=e810 00:01:30.738 ++ SPDK_RUN_UBSAN=1 00:01:30.738 ++ NET_TYPE=phy 00:01:30.738 ++ RUN_NIGHTLY=1 00:01:30.738 + case $SPDK_TEST_NVMF_NICS in 00:01:30.738 + DRIVERS=ice 00:01:30.738 + [[ tcp == \r\d\m\a ]] 00:01:30.738 + [[ -n ice ]] 00:01:30.738 + sudo rmmod mlx4_ib mlx5_ib irdma 
i40iw iw_cxgb4 00:01:30.738 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:30.738 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:30.738 rmmod: ERROR: Module irdma is not currently loaded 00:01:30.738 rmmod: ERROR: Module i40iw is not currently loaded 00:01:30.738 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:30.738 + true 00:01:30.738 + for D in $DRIVERS 00:01:30.738 + sudo modprobe ice 00:01:30.738 + exit 0 00:01:30.748 [Pipeline] } 00:01:30.766 [Pipeline] // withEnv 00:01:30.772 [Pipeline] } 00:01:30.790 [Pipeline] // stage 00:01:30.801 [Pipeline] catchError 00:01:30.803 [Pipeline] { 00:01:30.820 [Pipeline] timeout 00:01:30.820 Timeout set to expire in 50 min 00:01:30.822 [Pipeline] { 00:01:30.838 [Pipeline] stage 00:01:30.840 [Pipeline] { (Tests) 00:01:30.856 [Pipeline] sh 00:01:31.141 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:31.141 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:31.141 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:31.141 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:31.141 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.141 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:31.141 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:31.141 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:31.141 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:31.141 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:31.141 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:31.141 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:31.141 + source /etc/os-release 00:01:31.141 ++ NAME='Fedora Linux' 00:01:31.141 ++ VERSION='38 (Cloud Edition)' 00:01:31.141 ++ ID=fedora 00:01:31.141 ++ VERSION_ID=38 00:01:31.141 ++ VERSION_CODENAME= 00:01:31.141 ++ PLATFORM_ID=platform:f38 00:01:31.141 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:31.141 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:31.141 ++ LOGO=fedora-logo-icon 00:01:31.141 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:31.141 ++ HOME_URL=https://fedoraproject.org/ 00:01:31.141 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:31.141 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:31.141 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:31.141 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:31.141 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:31.141 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:31.141 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:31.141 ++ SUPPORT_END=2024-05-14 00:01:31.141 ++ VARIANT='Cloud Edition' 00:01:31.141 ++ VARIANT_ID=cloud 00:01:31.141 + uname -a 00:01:31.141 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:31.141 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:33.681 Hugepages 00:01:33.681 node hugesize free / total 00:01:33.681 node0 1048576kB 0 / 0 00:01:33.681 node0 2048kB 0 / 0 00:01:33.681 node1 1048576kB 0 / 0 00:01:33.681 node1 2048kB 0 / 0 00:01:33.681 00:01:33.681 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:33.681 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:33.681 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:33.681 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:33.681 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:33.681 I/OAT 0000:00:04.4 8086 2021 0 ioatdma 
- - 00:01:33.681 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:33.681 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:33.681 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:33.681 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:33.681 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:33.681 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:33.681 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:33.681 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:33.681 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:33.681 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:33.681 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:33.681 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:33.681 + rm -f /tmp/spdk-ld-path 00:01:33.681 + source autorun-spdk.conf 00:01:33.681 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.681 ++ SPDK_TEST_NVMF=1 00:01:33.681 ++ SPDK_TEST_NVME_CLI=1 00:01:33.681 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.681 ++ SPDK_TEST_NVMF_NICS=e810 00:01:33.681 ++ SPDK_RUN_UBSAN=1 00:01:33.681 ++ NET_TYPE=phy 00:01:33.681 ++ RUN_NIGHTLY=1 00:01:33.681 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:33.681 + [[ -n '' ]] 00:01:33.681 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.681 + for M in /var/spdk/build-*-manifest.txt 00:01:33.681 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:33.681 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:33.681 + for M in /var/spdk/build-*-manifest.txt 00:01:33.681 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:33.681 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:33.681 ++ uname 00:01:33.681 + [[ Linux == \L\i\n\u\x ]] 00:01:33.681 + sudo dmesg -T 00:01:33.681 + sudo dmesg --clear 00:01:33.681 + dmesg_pid=2975858 00:01:33.681 + [[ Fedora Linux == FreeBSD ]] 00:01:33.681 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.681 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.681 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:33.681 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:33.681 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:33.681 + [[ -x /usr/src/fio-static/fio ]] 00:01:33.681 + export FIO_BIN=/usr/src/fio-static/fio 00:01:33.681 + FIO_BIN=/usr/src/fio-static/fio 00:01:33.681 + sudo dmesg -Tw 00:01:33.681 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:33.681 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:33.681 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:33.681 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.681 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.682 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:33.682 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.682 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.682 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:33.682 Test configuration: 00:01:33.682 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.682 SPDK_TEST_NVMF=1 00:01:33.682 SPDK_TEST_NVME_CLI=1 00:01:33.682 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.682 SPDK_TEST_NVMF_NICS=e810 00:01:33.682 SPDK_RUN_UBSAN=1 00:01:33.682 NET_TYPE=phy 00:01:33.682 RUN_NIGHTLY=1 14:46:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:33.682 14:46:52 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:33.682 14:46:52 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:33.682 14:46:52 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:33.682 14:46:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.682 14:46:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.682 14:46:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.682 14:46:52 -- paths/export.sh@5 -- $ export PATH 00:01:33.682 14:46:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.682 14:46:52 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:33.682 14:46:52 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:33.682 14:46:52 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718110012.XXXXXX 00:01:33.682 14:46:52 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718110012.HqnZdn 00:01:33.682 14:46:52 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:33.682 14:46:52 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
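autorun.sh is handed the per-job autorun-spdk.conf listed in the "Test configuration" dump above; the file is plain shell assignments and is simply sourced, as the "++ SPDK_TEST_..." trace lines earlier in this log show. A minimal sketch of that pattern, assuming only standard shell behaviour (the test-selection logic below is illustrative, not the actual autorun.sh internals):

conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
# Pull in the job's flags exactly as the runner does.
[[ -f $conf ]] && source "$conf"
# Later stages gate test suites on these flags; for this job the NVMe-oF/TCP path is selected.
if [[ ${SPDK_TEST_NVMF:-0} -eq 1 && ${SPDK_TEST_NVMF_TRANSPORT:-} == tcp ]]; then
    echo "running NVMe-oF functional tests over TCP with ${SPDK_TEST_NVMF_NICS} NICs"
fi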
00:01:33.682 14:46:52 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:33.682 14:46:52 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:33.682 14:46:52 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:33.682 14:46:52 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:33.682 14:46:52 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:33.682 14:46:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.682 14:46:52 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:33.682 14:46:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:33.682 14:46:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:33.682 14:46:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.682 14:46:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.682 Tue Jun 11 12:46:52 PM UTC 2024 00:01:33.682 14:46:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.942 LTS-43-g130b9406a 00:01:33.942 14:46:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:33.942 14:46:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.942 14:46:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.942 14:46:52 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:33.942 14:46:52 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:33.942 14:46:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.942 ************************************ 00:01:33.942 START TEST ubsan 00:01:33.942 ************************************ 00:01:33.942 14:46:52 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:33.942 using ubsan 00:01:33.942 00:01:33.942 real 0m0.000s 00:01:33.942 user 0m0.000s 00:01:33.942 sys 0m0.000s 00:01:33.942 14:46:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:33.942 14:46:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.942 ************************************ 00:01:33.942 END TEST ubsan 00:01:33.942 ************************************ 00:01:33.942 14:46:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:33.942 14:46:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:33.942 14:46:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:33.942 14:46:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:33.942 14:46:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:33.942 14:46:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:33.942 14:46:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:33.942 14:46:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:33.942 14:46:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:33.942 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:33.942 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:34.201 Using 'verbs' RDMA provider 00:01:46.995 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:01.892 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:01.892 Creating mk/config.mk...done. 00:02:01.892 Creating mk/cc.flags.mk...done. 00:02:01.892 Type 'make' to build. 00:02:01.892 14:47:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:01.892 14:47:18 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:01.892 14:47:18 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:01.892 14:47:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.892 ************************************ 00:02:01.892 START TEST make 00:02:01.892 ************************************ 00:02:01.892 14:47:18 -- common/autotest_common.sh@1104 -- $ make -j112 00:02:01.892 make[1]: Nothing to be done for 'all'. 00:02:10.015 The Meson build system 00:02:10.015 Version: 1.3.1 00:02:10.015 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:10.015 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:10.015 Build type: native build 00:02:10.015 Program cat found: YES (/usr/bin/cat) 00:02:10.015 Project name: DPDK 00:02:10.015 Project version: 23.11.0 00:02:10.015 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:10.015 C linker for the host machine: cc ld.bfd 2.39-16 00:02:10.015 Host machine cpu family: x86_64 00:02:10.015 Host machine cpu: x86_64 00:02:10.015 Message: ## Building in Developer Mode ## 00:02:10.015 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:10.015 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:10.015 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:10.015 Program python3 found: YES (/usr/bin/python3) 00:02:10.015 Program cat found: YES (/usr/bin/cat) 00:02:10.015 Compiler for C supports arguments -march=native: YES 00:02:10.015 Checking for size of "void *" : 8 00:02:10.015 Checking for size of "void *" : 8 (cached) 00:02:10.015 Library m found: YES 00:02:10.015 Library numa found: YES 00:02:10.015 Has header "numaif.h" : YES 00:02:10.015 Library fdt found: NO 00:02:10.015 Library execinfo found: NO 00:02:10.015 Has header "execinfo.h" : YES 00:02:10.015 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:10.015 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:10.015 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:10.015 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:10.015 Run-time dependency openssl found: YES 3.0.9 00:02:10.015 Run-time dependency libpcap found: YES 1.10.4 00:02:10.015 Has header "pcap.h" with dependency libpcap: YES 00:02:10.015 Compiler for C supports arguments -Wcast-qual: YES 00:02:10.015 Compiler for C supports arguments -Wdeprecated: YES 00:02:10.015 Compiler for C supports arguments -Wformat: YES 00:02:10.015 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:10.015 Compiler for C supports arguments -Wformat-security: NO 00:02:10.015 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:10.015 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:10.015 Compiler for C 
supports arguments -Wnested-externs: YES 00:02:10.015 Compiler for C supports arguments -Wold-style-definition: YES 00:02:10.015 Compiler for C supports arguments -Wpointer-arith: YES 00:02:10.015 Compiler for C supports arguments -Wsign-compare: YES 00:02:10.015 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:10.015 Compiler for C supports arguments -Wundef: YES 00:02:10.015 Compiler for C supports arguments -Wwrite-strings: YES 00:02:10.015 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:10.015 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:10.015 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:10.015 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:10.015 Program objdump found: YES (/usr/bin/objdump) 00:02:10.015 Compiler for C supports arguments -mavx512f: YES 00:02:10.015 Checking if "AVX512 checking" compiles: YES 00:02:10.015 Fetching value of define "__SSE4_2__" : 1 00:02:10.015 Fetching value of define "__AES__" : 1 00:02:10.015 Fetching value of define "__AVX__" : 1 00:02:10.015 Fetching value of define "__AVX2__" : 1 00:02:10.015 Fetching value of define "__AVX512BW__" : 1 00:02:10.015 Fetching value of define "__AVX512CD__" : 1 00:02:10.015 Fetching value of define "__AVX512DQ__" : 1 00:02:10.015 Fetching value of define "__AVX512F__" : 1 00:02:10.015 Fetching value of define "__AVX512VL__" : 1 00:02:10.015 Fetching value of define "__PCLMUL__" : 1 00:02:10.015 Fetching value of define "__RDRND__" : 1 00:02:10.015 Fetching value of define "__RDSEED__" : 1 00:02:10.015 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:10.015 Fetching value of define "__znver1__" : (undefined) 00:02:10.015 Fetching value of define "__znver2__" : (undefined) 00:02:10.015 Fetching value of define "__znver3__" : (undefined) 00:02:10.015 Fetching value of define "__znver4__" : (undefined) 00:02:10.015 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:10.015 Message: lib/log: Defining dependency "log" 00:02:10.015 Message: lib/kvargs: Defining dependency "kvargs" 00:02:10.015 Message: lib/telemetry: Defining dependency "telemetry" 00:02:10.015 Checking for function "getentropy" : NO 00:02:10.015 Message: lib/eal: Defining dependency "eal" 00:02:10.015 Message: lib/ring: Defining dependency "ring" 00:02:10.015 Message: lib/rcu: Defining dependency "rcu" 00:02:10.015 Message: lib/mempool: Defining dependency "mempool" 00:02:10.015 Message: lib/mbuf: Defining dependency "mbuf" 00:02:10.015 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:10.015 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:10.015 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:10.015 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:10.015 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:10.015 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:10.015 Compiler for C supports arguments -mpclmul: YES 00:02:10.015 Compiler for C supports arguments -maes: YES 00:02:10.015 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:10.015 Compiler for C supports arguments -mavx512bw: YES 00:02:10.015 Compiler for C supports arguments -mavx512dq: YES 00:02:10.015 Compiler for C supports arguments -mavx512vl: YES 00:02:10.015 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:10.015 Compiler for C supports arguments -mavx2: YES 00:02:10.015 Compiler for C supports arguments -mavx: YES 00:02:10.015 Message: lib/net: 
Defining dependency "net" 00:02:10.015 Message: lib/meter: Defining dependency "meter" 00:02:10.015 Message: lib/ethdev: Defining dependency "ethdev" 00:02:10.015 Message: lib/pci: Defining dependency "pci" 00:02:10.015 Message: lib/cmdline: Defining dependency "cmdline" 00:02:10.015 Message: lib/hash: Defining dependency "hash" 00:02:10.016 Message: lib/timer: Defining dependency "timer" 00:02:10.016 Message: lib/compressdev: Defining dependency "compressdev" 00:02:10.016 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:10.016 Message: lib/dmadev: Defining dependency "dmadev" 00:02:10.016 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:10.016 Message: lib/power: Defining dependency "power" 00:02:10.016 Message: lib/reorder: Defining dependency "reorder" 00:02:10.016 Message: lib/security: Defining dependency "security" 00:02:10.016 Has header "linux/userfaultfd.h" : YES 00:02:10.016 Has header "linux/vduse.h" : YES 00:02:10.016 Message: lib/vhost: Defining dependency "vhost" 00:02:10.016 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:10.016 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:10.016 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:10.016 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:10.016 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:10.016 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:10.016 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:10.016 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:10.016 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:10.016 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:10.016 Program doxygen found: YES (/usr/bin/doxygen) 00:02:10.016 Configuring doxy-api-html.conf using configuration 00:02:10.016 Configuring doxy-api-man.conf using configuration 00:02:10.016 Program mandb found: YES (/usr/bin/mandb) 00:02:10.016 Program sphinx-build found: NO 00:02:10.016 Configuring rte_build_config.h using configuration 00:02:10.016 Message: 00:02:10.016 ================= 00:02:10.016 Applications Enabled 00:02:10.016 ================= 00:02:10.016 00:02:10.016 apps: 00:02:10.016 00:02:10.016 00:02:10.016 Message: 00:02:10.016 ================= 00:02:10.016 Libraries Enabled 00:02:10.016 ================= 00:02:10.016 00:02:10.016 libs: 00:02:10.016 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:10.016 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:10.016 cryptodev, dmadev, power, reorder, security, vhost, 00:02:10.016 00:02:10.016 Message: 00:02:10.016 =============== 00:02:10.016 Drivers Enabled 00:02:10.016 =============== 00:02:10.016 00:02:10.016 common: 00:02:10.016 00:02:10.016 bus: 00:02:10.016 pci, vdev, 00:02:10.016 mempool: 00:02:10.016 ring, 00:02:10.016 dma: 00:02:10.016 00:02:10.016 net: 00:02:10.016 00:02:10.016 crypto: 00:02:10.016 00:02:10.016 compress: 00:02:10.016 00:02:10.016 vdpa: 00:02:10.016 00:02:10.016 00:02:10.016 Message: 00:02:10.016 ================= 00:02:10.016 Content Skipped 00:02:10.016 ================= 00:02:10.016 00:02:10.016 apps: 00:02:10.016 dumpcap: explicitly disabled via build config 00:02:10.016 graph: explicitly disabled via build config 00:02:10.016 pdump: explicitly disabled via build config 00:02:10.016 proc-info: explicitly disabled via build config 00:02:10.016 test-acl: explicitly 
disabled via build config 00:02:10.016 test-bbdev: explicitly disabled via build config 00:02:10.016 test-cmdline: explicitly disabled via build config 00:02:10.016 test-compress-perf: explicitly disabled via build config 00:02:10.016 test-crypto-perf: explicitly disabled via build config 00:02:10.016 test-dma-perf: explicitly disabled via build config 00:02:10.016 test-eventdev: explicitly disabled via build config 00:02:10.016 test-fib: explicitly disabled via build config 00:02:10.016 test-flow-perf: explicitly disabled via build config 00:02:10.016 test-gpudev: explicitly disabled via build config 00:02:10.016 test-mldev: explicitly disabled via build config 00:02:10.016 test-pipeline: explicitly disabled via build config 00:02:10.016 test-pmd: explicitly disabled via build config 00:02:10.016 test-regex: explicitly disabled via build config 00:02:10.016 test-sad: explicitly disabled via build config 00:02:10.016 test-security-perf: explicitly disabled via build config 00:02:10.016 00:02:10.016 libs: 00:02:10.016 metrics: explicitly disabled via build config 00:02:10.016 acl: explicitly disabled via build config 00:02:10.016 bbdev: explicitly disabled via build config 00:02:10.016 bitratestats: explicitly disabled via build config 00:02:10.016 bpf: explicitly disabled via build config 00:02:10.016 cfgfile: explicitly disabled via build config 00:02:10.016 distributor: explicitly disabled via build config 00:02:10.016 efd: explicitly disabled via build config 00:02:10.016 eventdev: explicitly disabled via build config 00:02:10.016 dispatcher: explicitly disabled via build config 00:02:10.016 gpudev: explicitly disabled via build config 00:02:10.016 gro: explicitly disabled via build config 00:02:10.016 gso: explicitly disabled via build config 00:02:10.016 ip_frag: explicitly disabled via build config 00:02:10.016 jobstats: explicitly disabled via build config 00:02:10.016 latencystats: explicitly disabled via build config 00:02:10.016 lpm: explicitly disabled via build config 00:02:10.016 member: explicitly disabled via build config 00:02:10.016 pcapng: explicitly disabled via build config 00:02:10.016 rawdev: explicitly disabled via build config 00:02:10.016 regexdev: explicitly disabled via build config 00:02:10.016 mldev: explicitly disabled via build config 00:02:10.016 rib: explicitly disabled via build config 00:02:10.016 sched: explicitly disabled via build config 00:02:10.016 stack: explicitly disabled via build config 00:02:10.016 ipsec: explicitly disabled via build config 00:02:10.016 pdcp: explicitly disabled via build config 00:02:10.016 fib: explicitly disabled via build config 00:02:10.016 port: explicitly disabled via build config 00:02:10.016 pdump: explicitly disabled via build config 00:02:10.016 table: explicitly disabled via build config 00:02:10.016 pipeline: explicitly disabled via build config 00:02:10.016 graph: explicitly disabled via build config 00:02:10.016 node: explicitly disabled via build config 00:02:10.016 00:02:10.016 drivers: 00:02:10.016 common/cpt: not in enabled drivers build config 00:02:10.016 common/dpaax: not in enabled drivers build config 00:02:10.016 common/iavf: not in enabled drivers build config 00:02:10.016 common/idpf: not in enabled drivers build config 00:02:10.016 common/mvep: not in enabled drivers build config 00:02:10.016 common/octeontx: not in enabled drivers build config 00:02:10.016 bus/auxiliary: not in enabled drivers build config 00:02:10.016 bus/cdx: not in enabled drivers build config 00:02:10.016 bus/dpaa: not in 
enabled drivers build config 00:02:10.016 bus/fslmc: not in enabled drivers build config 00:02:10.016 bus/ifpga: not in enabled drivers build config 00:02:10.016 bus/platform: not in enabled drivers build config 00:02:10.016 bus/vmbus: not in enabled drivers build config 00:02:10.016 common/cnxk: not in enabled drivers build config 00:02:10.016 common/mlx5: not in enabled drivers build config 00:02:10.016 common/nfp: not in enabled drivers build config 00:02:10.016 common/qat: not in enabled drivers build config 00:02:10.016 common/sfc_efx: not in enabled drivers build config 00:02:10.016 mempool/bucket: not in enabled drivers build config 00:02:10.016 mempool/cnxk: not in enabled drivers build config 00:02:10.016 mempool/dpaa: not in enabled drivers build config 00:02:10.016 mempool/dpaa2: not in enabled drivers build config 00:02:10.016 mempool/octeontx: not in enabled drivers build config 00:02:10.016 mempool/stack: not in enabled drivers build config 00:02:10.016 dma/cnxk: not in enabled drivers build config 00:02:10.016 dma/dpaa: not in enabled drivers build config 00:02:10.016 dma/dpaa2: not in enabled drivers build config 00:02:10.016 dma/hisilicon: not in enabled drivers build config 00:02:10.016 dma/idxd: not in enabled drivers build config 00:02:10.016 dma/ioat: not in enabled drivers build config 00:02:10.016 dma/skeleton: not in enabled drivers build config 00:02:10.016 net/af_packet: not in enabled drivers build config 00:02:10.016 net/af_xdp: not in enabled drivers build config 00:02:10.016 net/ark: not in enabled drivers build config 00:02:10.016 net/atlantic: not in enabled drivers build config 00:02:10.016 net/avp: not in enabled drivers build config 00:02:10.016 net/axgbe: not in enabled drivers build config 00:02:10.016 net/bnx2x: not in enabled drivers build config 00:02:10.016 net/bnxt: not in enabled drivers build config 00:02:10.016 net/bonding: not in enabled drivers build config 00:02:10.016 net/cnxk: not in enabled drivers build config 00:02:10.016 net/cpfl: not in enabled drivers build config 00:02:10.016 net/cxgbe: not in enabled drivers build config 00:02:10.016 net/dpaa: not in enabled drivers build config 00:02:10.016 net/dpaa2: not in enabled drivers build config 00:02:10.016 net/e1000: not in enabled drivers build config 00:02:10.016 net/ena: not in enabled drivers build config 00:02:10.016 net/enetc: not in enabled drivers build config 00:02:10.016 net/enetfec: not in enabled drivers build config 00:02:10.016 net/enic: not in enabled drivers build config 00:02:10.016 net/failsafe: not in enabled drivers build config 00:02:10.016 net/fm10k: not in enabled drivers build config 00:02:10.016 net/gve: not in enabled drivers build config 00:02:10.016 net/hinic: not in enabled drivers build config 00:02:10.016 net/hns3: not in enabled drivers build config 00:02:10.016 net/i40e: not in enabled drivers build config 00:02:10.016 net/iavf: not in enabled drivers build config 00:02:10.016 net/ice: not in enabled drivers build config 00:02:10.016 net/idpf: not in enabled drivers build config 00:02:10.016 net/igc: not in enabled drivers build config 00:02:10.016 net/ionic: not in enabled drivers build config 00:02:10.016 net/ipn3ke: not in enabled drivers build config 00:02:10.016 net/ixgbe: not in enabled drivers build config 00:02:10.016 net/mana: not in enabled drivers build config 00:02:10.016 net/memif: not in enabled drivers build config 00:02:10.016 net/mlx4: not in enabled drivers build config 00:02:10.016 net/mlx5: not in enabled drivers build config 
00:02:10.016 net/mvneta: not in enabled drivers build config 00:02:10.016 net/mvpp2: not in enabled drivers build config 00:02:10.016 net/netvsc: not in enabled drivers build config 00:02:10.016 net/nfb: not in enabled drivers build config 00:02:10.016 net/nfp: not in enabled drivers build config 00:02:10.016 net/ngbe: not in enabled drivers build config 00:02:10.017 net/null: not in enabled drivers build config 00:02:10.017 net/octeontx: not in enabled drivers build config 00:02:10.017 net/octeon_ep: not in enabled drivers build config 00:02:10.017 net/pcap: not in enabled drivers build config 00:02:10.017 net/pfe: not in enabled drivers build config 00:02:10.017 net/qede: not in enabled drivers build config 00:02:10.017 net/ring: not in enabled drivers build config 00:02:10.017 net/sfc: not in enabled drivers build config 00:02:10.017 net/softnic: not in enabled drivers build config 00:02:10.017 net/tap: not in enabled drivers build config 00:02:10.017 net/thunderx: not in enabled drivers build config 00:02:10.017 net/txgbe: not in enabled drivers build config 00:02:10.017 net/vdev_netvsc: not in enabled drivers build config 00:02:10.017 net/vhost: not in enabled drivers build config 00:02:10.017 net/virtio: not in enabled drivers build config 00:02:10.017 net/vmxnet3: not in enabled drivers build config 00:02:10.017 raw/*: missing internal dependency, "rawdev" 00:02:10.017 crypto/armv8: not in enabled drivers build config 00:02:10.017 crypto/bcmfs: not in enabled drivers build config 00:02:10.017 crypto/caam_jr: not in enabled drivers build config 00:02:10.017 crypto/ccp: not in enabled drivers build config 00:02:10.017 crypto/cnxk: not in enabled drivers build config 00:02:10.017 crypto/dpaa_sec: not in enabled drivers build config 00:02:10.017 crypto/dpaa2_sec: not in enabled drivers build config 00:02:10.017 crypto/ipsec_mb: not in enabled drivers build config 00:02:10.017 crypto/mlx5: not in enabled drivers build config 00:02:10.017 crypto/mvsam: not in enabled drivers build config 00:02:10.017 crypto/nitrox: not in enabled drivers build config 00:02:10.017 crypto/null: not in enabled drivers build config 00:02:10.017 crypto/octeontx: not in enabled drivers build config 00:02:10.017 crypto/openssl: not in enabled drivers build config 00:02:10.017 crypto/scheduler: not in enabled drivers build config 00:02:10.017 crypto/uadk: not in enabled drivers build config 00:02:10.017 crypto/virtio: not in enabled drivers build config 00:02:10.017 compress/isal: not in enabled drivers build config 00:02:10.017 compress/mlx5: not in enabled drivers build config 00:02:10.017 compress/octeontx: not in enabled drivers build config 00:02:10.017 compress/zlib: not in enabled drivers build config 00:02:10.017 regex/*: missing internal dependency, "regexdev" 00:02:10.017 ml/*: missing internal dependency, "mldev" 00:02:10.017 vdpa/ifc: not in enabled drivers build config 00:02:10.017 vdpa/mlx5: not in enabled drivers build config 00:02:10.017 vdpa/nfp: not in enabled drivers build config 00:02:10.017 vdpa/sfc: not in enabled drivers build config 00:02:10.017 event/*: missing internal dependency, "eventdev" 00:02:10.017 baseband/*: missing internal dependency, "bbdev" 00:02:10.017 gpu/*: missing internal dependency, "gpudev" 00:02:10.017 00:02:10.017 00:02:10.017 Build targets in project: 85 00:02:10.017 00:02:10.017 DPDK 23.11.0 00:02:10.017 00:02:10.017 User defined options 00:02:10.017 buildtype : debug 00:02:10.017 default_library : shared 00:02:10.017 libdir : lib 00:02:10.017 prefix : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:10.017 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:10.017 c_link_args : 00:02:10.017 cpu_instruction_set: native 00:02:10.017 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:10.017 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:10.017 enable_docs : false 00:02:10.017 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:10.017 enable_kmods : false 00:02:10.017 tests : false 00:02:10.017 00:02:10.017 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:10.017 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:10.017 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.017 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:10.017 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.017 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:10.017 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.017 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:10.017 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.017 [8/265] Linking static target lib/librte_kvargs.a 00:02:10.017 [9/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:10.017 [10/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:10.017 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.017 [12/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:10.017 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:10.017 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.017 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.017 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:10.017 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:10.017 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:10.017 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:10.017 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:10.017 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:10.017 [22/265] Linking static target lib/librte_log.a 00:02:10.017 [23/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:10.017 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.017 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:10.017 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:10.017 [27/265] Linking static target lib/librte_pci.a 00:02:10.017 [28/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:10.017 [29/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:10.017 [30/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.017 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:10.017 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:10.017 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:10.017 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:10.017 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:10.017 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:10.017 [37/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:10.017 [38/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:10.017 [39/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:10.017 [40/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:10.017 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.017 [42/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:10.277 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:10.277 [44/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.277 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.277 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.277 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:10.277 [48/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.277 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:10.277 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.277 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.277 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:10.277 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.277 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:10.277 [55/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.277 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.277 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.277 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.277 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.277 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.277 [61/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:10.277 [62/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.277 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:10.277 [64/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:10.277 [65/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.277 [66/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:10.277 [67/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:10.277 [68/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.277 [69/265] Linking static target lib/librte_ring.a 00:02:10.277 [70/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:10.277 [71/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:10.277 [72/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:10.277 [73/265] Linking static target lib/librte_meter.a 00:02:10.277 [74/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.277 [75/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:10.277 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:10.277 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.277 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.277 [79/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:10.277 [80/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:10.576 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:10.576 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.576 [83/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:10.576 [84/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:10.576 [85/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:10.576 [86/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:10.576 [87/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.576 [88/265] Linking static target lib/librte_telemetry.a 00:02:10.576 [89/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:10.576 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.576 [91/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:10.576 [92/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:10.576 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.576 [94/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.576 [95/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.576 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.576 [97/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:10.576 [98/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:10.576 [99/265] Linking static target lib/librte_cmdline.a 00:02:10.576 [100/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:10.576 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.576 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.576 [103/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:10.576 [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:10.576 [105/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:10.576 [106/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:10.576 [107/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.576 [108/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:10.576 [109/265] Compiling C 
object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:10.576 [110/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.576 [111/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.576 [112/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:10.576 [113/265] Linking static target lib/librte_timer.a 00:02:10.576 [114/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:10.576 [115/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.576 [116/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:10.577 [117/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.577 [118/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.577 [119/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:10.577 [120/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.577 [121/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.577 [122/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.577 [123/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:10.577 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:10.577 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:10.577 [126/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.577 [127/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.577 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:10.577 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:10.577 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:10.577 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:10.577 [132/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.577 [133/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.577 [134/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:10.577 [135/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:10.577 [136/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.577 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:10.577 [138/265] Linking static target lib/librte_mempool.a 00:02:10.577 [139/265] Linking target lib/librte_log.so.24.0 00:02:10.577 [140/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:10.577 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:10.577 [142/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:10.577 [143/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.577 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:10.577 [145/265] Linking static target lib/librte_eal.a 00:02:10.577 [146/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.577 [147/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:10.577 [148/265] Linking static target lib/librte_net.a 00:02:10.577 [149/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 
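For reference, the DPDK build being compiled here follows from the "User defined options" block Meson printed above. The command below is a hedged reconstruction of that configuration (SPDK's configure script generates the real invocation, and the long disable_apps/disable_libs lists shown earlier are omitted here for brevity):

meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
    -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Denable_kmods=false -Dtests=false
# Then build with the same parallelism the log records.
ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112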
00:02:10.577 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.577 [151/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:10.577 [152/265] Linking static target lib/librte_compressdev.a 00:02:10.577 [153/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.577 [154/265] Linking static target lib/librte_dmadev.a 00:02:10.577 [155/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:10.577 [156/265] Linking static target lib/librte_mbuf.a 00:02:10.860 [157/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:10.860 [158/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:10.860 [159/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:10.860 [160/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:10.860 [161/265] Linking static target lib/librte_rcu.a 00:02:10.860 [162/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:10.860 [163/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:10.860 [164/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:10.860 [165/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.860 [166/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:10.860 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:10.860 [168/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:10.860 [169/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:10.860 [170/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:10.860 [171/265] Linking static target lib/librte_reorder.a 00:02:10.860 [172/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:10.860 [173/265] Linking static target lib/librte_power.a 00:02:10.860 [174/265] Linking target lib/librte_kvargs.so.24.0 00:02:10.860 [175/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.860 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:10.860 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:10.860 [178/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:10.860 [179/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:10.860 [180/265] Linking static target lib/librte_security.a 00:02:10.860 [181/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:10.860 [182/265] Linking static target lib/librte_hash.a 00:02:10.860 [183/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:10.860 [184/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.860 [185/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.860 [186/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:10.860 [187/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.860 [188/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.860 [189/265] Linking static target drivers/librte_bus_vdev.a 00:02:10.860 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:10.860 [191/265] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.119 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:11.119 [193/265] Linking target lib/librte_telemetry.so.24.0 00:02:11.119 [194/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.119 [195/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:11.119 [196/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:11.119 [197/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:11.119 [198/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.119 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:11.119 [200/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.119 [201/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.119 [202/265] Linking static target drivers/librte_bus_pci.a 00:02:11.119 [203/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:11.119 [204/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.119 [205/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.119 [206/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.119 [207/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.378 [208/265] Linking static target lib/librte_cryptodev.a 00:02:11.378 [209/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:11.378 [210/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:11.378 [211/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:11.378 [212/265] Linking static target drivers/librte_mempool_ring.a 00:02:11.378 [213/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.378 [214/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.378 [215/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.378 [216/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.378 [217/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.636 [218/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.636 [219/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.636 [220/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.895 [221/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:11.895 [222/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:11.895 [223/265] Linking static target lib/librte_ethdev.a 00:02:11.895 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.463 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:12.463 [226/265] Linking static target lib/librte_vhost.a 00:02:13.031 [227/265] Generating lib/cryptodev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:14.407 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.684 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.622 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.622 [231/265] Linking target lib/librte_eal.so.24.0 00:02:20.882 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:20.882 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:20.882 [234/265] Linking target lib/librte_pci.so.24.0 00:02:20.882 [235/265] Linking target lib/librte_meter.so.24.0 00:02:20.883 [236/265] Linking target lib/librte_ring.so.24.0 00:02:20.883 [237/265] Linking target lib/librte_timer.so.24.0 00:02:20.883 [238/265] Linking target lib/librte_dmadev.so.24.0 00:02:21.143 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:21.143 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:21.143 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:21.143 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:21.143 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:21.143 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:21.143 [245/265] Linking target lib/librte_rcu.so.24.0 00:02:21.143 [246/265] Linking target lib/librte_mempool.so.24.0 00:02:21.402 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:21.402 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:21.402 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:21.402 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:21.402 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:21.662 [252/265] Linking target lib/librte_compressdev.so.24.0 00:02:21.662 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:21.662 [254/265] Linking target lib/librte_net.so.24.0 00:02:21.662 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:21.662 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:21.662 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:21.662 [258/265] Linking target lib/librte_hash.so.24.0 00:02:21.662 [259/265] Linking target lib/librte_security.so.24.0 00:02:21.662 [260/265] Linking target lib/librte_cmdline.so.24.0 00:02:21.662 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:21.922 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:21.922 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:21.922 [264/265] Linking target lib/librte_power.so.24.0 00:02:21.922 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:21.922 INFO: autodetecting backend as ninja 00:02:21.922 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:23.303 CC lib/log/log.o 00:02:23.303 CC lib/log/log_flags.o 00:02:23.303 CC lib/log/log_deprecated.o 00:02:23.303 CC lib/ut_mock/mock.o 00:02:23.303 CC lib/ut/ut.o 00:02:23.303 LIB libspdk_ut.a 00:02:23.303 LIB libspdk_ut_mock.a 
00:02:23.303 LIB libspdk_log.a 00:02:23.303 SO libspdk_ut.so.1.0 00:02:23.303 SO libspdk_ut_mock.so.5.0 00:02:23.303 SO libspdk_log.so.6.1 00:02:23.303 SYMLINK libspdk_ut.so 00:02:23.303 SYMLINK libspdk_ut_mock.so 00:02:23.303 SYMLINK libspdk_log.so 00:02:23.562 CC lib/util/base64.o 00:02:23.562 CC lib/dma/dma.o 00:02:23.562 CC lib/util/cpuset.o 00:02:23.562 CC lib/util/bit_array.o 00:02:23.562 CC lib/util/crc16.o 00:02:23.562 CC lib/util/crc32.o 00:02:23.562 CC lib/ioat/ioat.o 00:02:23.562 CC lib/util/crc32c.o 00:02:23.562 CC lib/util/crc32_ieee.o 00:02:23.562 CC lib/util/fd.o 00:02:23.562 CC lib/util/crc64.o 00:02:23.562 CXX lib/trace_parser/trace.o 00:02:23.562 CC lib/util/dif.o 00:02:23.562 CC lib/util/file.o 00:02:23.562 CC lib/util/hexlify.o 00:02:23.562 CC lib/util/iov.o 00:02:23.562 CC lib/util/math.o 00:02:23.562 CC lib/util/pipe.o 00:02:23.562 CC lib/util/strerror_tls.o 00:02:23.562 CC lib/util/string.o 00:02:23.562 CC lib/util/uuid.o 00:02:23.562 CC lib/util/fd_group.o 00:02:23.562 CC lib/util/xor.o 00:02:23.562 CC lib/util/zipf.o 00:02:23.562 CC lib/vfio_user/host/vfio_user_pci.o 00:02:23.562 CC lib/vfio_user/host/vfio_user.o 00:02:23.821 LIB libspdk_dma.a 00:02:23.821 SO libspdk_dma.so.3.0 00:02:23.821 SYMLINK libspdk_dma.so 00:02:23.821 LIB libspdk_ioat.a 00:02:23.821 SO libspdk_ioat.so.6.0 00:02:23.821 LIB libspdk_vfio_user.a 00:02:23.821 SYMLINK libspdk_ioat.so 00:02:24.081 SO libspdk_vfio_user.so.4.0 00:02:24.081 SYMLINK libspdk_vfio_user.so 00:02:24.081 LIB libspdk_util.a 00:02:24.081 SO libspdk_util.so.8.0 00:02:24.340 SYMLINK libspdk_util.so 00:02:24.340 LIB libspdk_trace_parser.a 00:02:24.601 SO libspdk_trace_parser.so.4.0 00:02:24.601 CC lib/json/json_util.o 00:02:24.601 CC lib/json/json_parse.o 00:02:24.601 CC lib/json/json_write.o 00:02:24.601 CC lib/vmd/vmd.o 00:02:24.601 CC lib/vmd/led.o 00:02:24.601 CC lib/env_dpdk/env.o 00:02:24.601 CC lib/rdma/common.o 00:02:24.601 CC lib/env_dpdk/memory.o 00:02:24.601 CC lib/rdma/rdma_verbs.o 00:02:24.601 CC lib/env_dpdk/pci.o 00:02:24.601 CC lib/idxd/idxd.o 00:02:24.601 CC lib/conf/conf.o 00:02:24.601 CC lib/env_dpdk/init.o 00:02:24.601 CC lib/idxd/idxd_user.o 00:02:24.601 CC lib/idxd/idxd_kernel.o 00:02:24.601 CC lib/env_dpdk/threads.o 00:02:24.601 CC lib/env_dpdk/pci_ioat.o 00:02:24.601 CC lib/env_dpdk/pci_virtio.o 00:02:24.601 CC lib/env_dpdk/pci_vmd.o 00:02:24.601 CC lib/env_dpdk/pci_idxd.o 00:02:24.601 CC lib/env_dpdk/pci_event.o 00:02:24.601 CC lib/env_dpdk/sigbus_handler.o 00:02:24.601 CC lib/env_dpdk/pci_dpdk.o 00:02:24.601 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:24.601 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:24.601 SYMLINK libspdk_trace_parser.so 00:02:24.860 LIB libspdk_conf.a 00:02:24.860 SO libspdk_conf.so.5.0 00:02:24.860 LIB libspdk_json.a 00:02:24.860 LIB libspdk_rdma.a 00:02:24.860 SO libspdk_json.so.5.1 00:02:24.860 SO libspdk_rdma.so.5.0 00:02:24.860 SYMLINK libspdk_conf.so 00:02:24.860 SYMLINK libspdk_json.so 00:02:24.860 SYMLINK libspdk_rdma.so 00:02:25.120 LIB libspdk_idxd.a 00:02:25.120 SO libspdk_idxd.so.11.0 00:02:25.120 CC lib/jsonrpc/jsonrpc_server.o 00:02:25.120 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:25.120 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:25.120 CC lib/jsonrpc/jsonrpc_client.o 00:02:25.120 LIB libspdk_vmd.a 00:02:25.120 SYMLINK libspdk_idxd.so 00:02:25.120 SO libspdk_vmd.so.5.0 00:02:25.379 SYMLINK libspdk_vmd.so 00:02:25.379 LIB libspdk_jsonrpc.a 00:02:25.379 SO libspdk_jsonrpc.so.5.1 00:02:25.639 SYMLINK libspdk_jsonrpc.so 00:02:25.639 CC lib/rpc/rpc.o 00:02:25.898 LIB libspdk_env_dpdk.a 
00:02:25.898 LIB libspdk_rpc.a 00:02:25.898 SO libspdk_env_dpdk.so.13.0 00:02:25.898 SO libspdk_rpc.so.5.0 00:02:26.158 SYMLINK libspdk_rpc.so 00:02:26.158 SYMLINK libspdk_env_dpdk.so 00:02:26.158 CC lib/notify/notify.o 00:02:26.158 CC lib/notify/notify_rpc.o 00:02:26.158 CC lib/sock/sock.o 00:02:26.158 CC lib/trace/trace.o 00:02:26.158 CC lib/sock/sock_rpc.o 00:02:26.158 CC lib/trace/trace_rpc.o 00:02:26.158 CC lib/trace/trace_flags.o 00:02:26.418 LIB libspdk_notify.a 00:02:26.419 SO libspdk_notify.so.5.0 00:02:26.419 LIB libspdk_trace.a 00:02:26.419 SYMLINK libspdk_notify.so 00:02:26.419 SO libspdk_trace.so.9.0 00:02:26.679 SYMLINK libspdk_trace.so 00:02:26.679 LIB libspdk_sock.a 00:02:26.679 SO libspdk_sock.so.8.0 00:02:26.679 SYMLINK libspdk_sock.so 00:02:26.939 CC lib/thread/thread.o 00:02:26.939 CC lib/thread/iobuf.o 00:02:26.939 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:26.939 CC lib/nvme/nvme_ctrlr.o 00:02:26.939 CC lib/nvme/nvme_fabric.o 00:02:26.939 CC lib/nvme/nvme_ns.o 00:02:26.939 CC lib/nvme/nvme_ns_cmd.o 00:02:26.939 CC lib/nvme/nvme_pcie_common.o 00:02:26.939 CC lib/nvme/nvme_pcie.o 00:02:26.939 CC lib/nvme/nvme_quirks.o 00:02:26.939 CC lib/nvme/nvme_qpair.o 00:02:26.939 CC lib/nvme/nvme.o 00:02:26.939 CC lib/nvme/nvme_transport.o 00:02:26.939 CC lib/nvme/nvme_discovery.o 00:02:26.939 CC lib/nvme/nvme_tcp.o 00:02:26.939 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:26.939 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:26.939 CC lib/nvme/nvme_opal.o 00:02:26.939 CC lib/nvme/nvme_io_msg.o 00:02:26.939 CC lib/nvme/nvme_poll_group.o 00:02:26.939 CC lib/nvme/nvme_zns.o 00:02:26.939 CC lib/nvme/nvme_cuse.o 00:02:26.939 CC lib/nvme/nvme_vfio_user.o 00:02:26.939 CC lib/nvme/nvme_rdma.o 00:02:28.322 LIB libspdk_thread.a 00:02:28.322 SO libspdk_thread.so.9.0 00:02:28.583 SYMLINK libspdk_thread.so 00:02:28.583 CC lib/virtio/virtio.o 00:02:28.583 CC lib/blob/zeroes.o 00:02:28.583 CC lib/blob/request.o 00:02:28.583 CC lib/blob/blobstore.o 00:02:28.583 CC lib/init/json_config.o 00:02:28.583 CC lib/init/subsystem.o 00:02:28.583 CC lib/virtio/virtio_vhost_user.o 00:02:28.583 CC lib/virtio/virtio_vfio_user.o 00:02:28.583 CC lib/init/subsystem_rpc.o 00:02:28.583 CC lib/virtio/virtio_pci.o 00:02:28.583 CC lib/blob/blob_bs_dev.o 00:02:28.583 CC lib/init/rpc.o 00:02:28.583 CC lib/accel/accel.o 00:02:28.583 CC lib/accel/accel_rpc.o 00:02:28.583 CC lib/accel/accel_sw.o 00:02:28.843 LIB libspdk_init.a 00:02:28.843 SO libspdk_init.so.4.0 00:02:29.103 LIB libspdk_virtio.a 00:02:29.103 SYMLINK libspdk_init.so 00:02:29.103 SO libspdk_virtio.so.6.0 00:02:29.103 LIB libspdk_nvme.a 00:02:29.103 SYMLINK libspdk_virtio.so 00:02:29.103 SO libspdk_nvme.so.12.0 00:02:29.363 CC lib/event/app.o 00:02:29.363 CC lib/event/reactor.o 00:02:29.363 CC lib/event/log_rpc.o 00:02:29.363 CC lib/event/app_rpc.o 00:02:29.363 CC lib/event/scheduler_static.o 00:02:29.622 SYMLINK libspdk_nvme.so 00:02:29.622 LIB libspdk_event.a 00:02:29.622 LIB libspdk_accel.a 00:02:29.622 SO libspdk_event.so.12.0 00:02:29.622 SO libspdk_accel.so.14.0 00:02:29.882 SYMLINK libspdk_event.so 00:02:29.882 SYMLINK libspdk_accel.so 00:02:30.141 CC lib/bdev/bdev.o 00:02:30.141 CC lib/bdev/bdev_rpc.o 00:02:30.141 CC lib/bdev/bdev_zone.o 00:02:30.141 CC lib/bdev/part.o 00:02:30.141 CC lib/bdev/scsi_nvme.o 00:02:31.517 LIB libspdk_blob.a 00:02:31.517 SO libspdk_blob.so.10.1 00:02:31.517 SYMLINK libspdk_blob.so 00:02:31.776 CC lib/blobfs/blobfs.o 00:02:31.776 CC lib/blobfs/tree.o 00:02:31.776 CC lib/lvol/lvol.o 00:02:32.712 LIB libspdk_bdev.a 00:02:32.712 LIB 
libspdk_blobfs.a 00:02:32.712 SO libspdk_blobfs.so.9.0 00:02:32.712 SO libspdk_bdev.so.14.0 00:02:32.712 LIB libspdk_lvol.a 00:02:32.712 SO libspdk_lvol.so.9.1 00:02:32.712 SYMLINK libspdk_blobfs.so 00:02:32.712 SYMLINK libspdk_bdev.so 00:02:32.712 SYMLINK libspdk_lvol.so 00:02:32.971 CC lib/ftl/ftl_core.o 00:02:32.971 CC lib/ftl/ftl_init.o 00:02:32.971 CC lib/ftl/ftl_layout.o 00:02:32.971 CC lib/ftl/ftl_debug.o 00:02:32.971 CC lib/ublk/ublk.o 00:02:32.971 CC lib/ftl/ftl_io.o 00:02:32.971 CC lib/ublk/ublk_rpc.o 00:02:32.971 CC lib/ftl/ftl_sb.o 00:02:32.971 CC lib/ftl/ftl_l2p_flat.o 00:02:32.971 CC lib/ftl/ftl_l2p.o 00:02:32.971 CC lib/ftl/ftl_nv_cache.o 00:02:32.971 CC lib/ftl/ftl_band.o 00:02:32.971 CC lib/ftl/ftl_band_ops.o 00:02:32.971 CC lib/ftl/ftl_writer.o 00:02:32.971 CC lib/ftl/ftl_rq.o 00:02:32.971 CC lib/ftl/ftl_reloc.o 00:02:32.971 CC lib/ftl/ftl_l2p_cache.o 00:02:32.971 CC lib/ftl/ftl_p2l.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.971 CC lib/scsi/dev.o 00:02:32.971 CC lib/nvmf/ctrlr.o 00:02:32.971 CC lib/nbd/nbd.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.971 CC lib/nbd/nbd_rpc.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.971 CC lib/nvmf/ctrlr_discovery.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.971 CC lib/scsi/port.o 00:02:32.971 CC lib/nvmf/nvmf.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.971 CC lib/scsi/lun.o 00:02:32.971 CC lib/nvmf/ctrlr_bdev.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:32.971 CC lib/nvmf/subsystem.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.971 CC lib/scsi/scsi.o 00:02:32.971 CC lib/nvmf/nvmf_rpc.o 00:02:32.971 CC lib/scsi/scsi_bdev.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:32.971 CC lib/nvmf/transport.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:32.971 CC lib/scsi/scsi_pr.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:32.971 CC lib/nvmf/tcp.o 00:02:32.971 CC lib/scsi/scsi_rpc.o 00:02:32.971 CC lib/nvmf/rdma.o 00:02:32.971 CC lib/scsi/task.o 00:02:32.971 CC lib/ftl/utils/ftl_md.o 00:02:32.971 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:32.971 CC lib/ftl/utils/ftl_conf.o 00:02:32.971 CC lib/ftl/utils/ftl_mempool.o 00:02:32.971 CC lib/ftl/utils/ftl_bitmap.o 00:02:32.971 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:32.971 CC lib/ftl/utils/ftl_property.o 00:02:32.971 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:32.971 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:32.971 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:32.971 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:32.971 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:32.971 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:32.971 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:32.971 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:32.971 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:32.971 CC lib/ftl/base/ftl_base_dev.o 00:02:32.971 CC lib/ftl/base/ftl_base_bdev.o 00:02:32.971 CC lib/ftl/ftl_trace.o 00:02:33.538 LIB libspdk_nbd.a 00:02:33.538 SO libspdk_nbd.so.6.0 00:02:33.538 SYMLINK libspdk_nbd.so 00:02:33.538 LIB libspdk_scsi.a 00:02:33.796 SO libspdk_scsi.so.8.0 00:02:33.796 LIB libspdk_ublk.a 00:02:33.796 SO libspdk_ublk.so.2.0 00:02:33.796 SYMLINK libspdk_scsi.so 00:02:33.796 SYMLINK libspdk_ublk.so 00:02:34.054 CC lib/iscsi/conn.o 00:02:34.054 CC lib/vhost/vhost.o 00:02:34.054 CC lib/iscsi/init_grp.o 00:02:34.054 CC lib/iscsi/iscsi.o 00:02:34.054 CC lib/iscsi/md5.o 00:02:34.054 CC lib/vhost/vhost_rpc.o 00:02:34.054 CC lib/iscsi/portal_grp.o 
00:02:34.054 CC lib/iscsi/param.o 00:02:34.054 CC lib/vhost/vhost_scsi.o 00:02:34.054 CC lib/vhost/vhost_blk.o 00:02:34.054 CC lib/iscsi/tgt_node.o 00:02:34.054 CC lib/vhost/rte_vhost_user.o 00:02:34.054 CC lib/iscsi/iscsi_subsystem.o 00:02:34.054 CC lib/iscsi/iscsi_rpc.o 00:02:34.054 CC lib/iscsi/task.o 00:02:34.054 LIB libspdk_ftl.a 00:02:34.312 SO libspdk_ftl.so.8.0 00:02:34.570 SYMLINK libspdk_ftl.so 00:02:35.138 LIB libspdk_vhost.a 00:02:35.138 LIB libspdk_nvmf.a 00:02:35.138 SO libspdk_vhost.so.7.1 00:02:35.138 SO libspdk_nvmf.so.17.0 00:02:35.138 SYMLINK libspdk_vhost.so 00:02:35.397 SYMLINK libspdk_nvmf.so 00:02:35.397 LIB libspdk_iscsi.a 00:02:35.397 SO libspdk_iscsi.so.7.0 00:02:35.656 SYMLINK libspdk_iscsi.so 00:02:35.916 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.916 CC module/accel/iaa/accel_iaa.o 00:02:35.916 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.916 CC module/accel/error/accel_error.o 00:02:35.916 CC module/accel/dsa/accel_dsa.o 00:02:35.916 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.916 CC module/accel/error/accel_error_rpc.o 00:02:35.916 CC module/blob/bdev/blob_bdev.o 00:02:35.916 CC module/accel/ioat/accel_ioat.o 00:02:35.916 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.916 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.916 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.916 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.916 CC module/sock/posix/posix.o 00:02:36.174 LIB libspdk_env_dpdk_rpc.a 00:02:36.174 SO libspdk_env_dpdk_rpc.so.5.0 00:02:36.174 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.174 LIB libspdk_accel_error.a 00:02:36.174 LIB libspdk_scheduler_gscheduler.a 00:02:36.174 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.174 SO libspdk_accel_error.so.1.0 00:02:36.174 LIB libspdk_accel_ioat.a 00:02:36.174 SO libspdk_scheduler_gscheduler.so.3.0 00:02:36.174 LIB libspdk_scheduler_dynamic.a 00:02:36.174 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:36.174 LIB libspdk_accel_iaa.a 00:02:36.174 SO libspdk_accel_ioat.so.5.0 00:02:36.174 SO libspdk_scheduler_dynamic.so.3.0 00:02:36.174 LIB libspdk_accel_dsa.a 00:02:36.174 SYMLINK libspdk_accel_error.so 00:02:36.174 SO libspdk_accel_iaa.so.2.0 00:02:36.174 SYMLINK libspdk_scheduler_gscheduler.so 00:02:36.174 LIB libspdk_blob_bdev.a 00:02:36.433 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.433 SO libspdk_accel_dsa.so.4.0 00:02:36.433 SO libspdk_blob_bdev.so.10.1 00:02:36.433 SYMLINK libspdk_accel_ioat.so 00:02:36.433 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.433 SYMLINK libspdk_accel_iaa.so 00:02:36.433 SYMLINK libspdk_accel_dsa.so 00:02:36.433 SYMLINK libspdk_blob_bdev.so 00:02:36.692 CC module/bdev/nvme/bdev_nvme.o 00:02:36.692 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:36.692 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.692 CC module/bdev/error/vbdev_error.o 00:02:36.692 CC module/bdev/nvme/nvme_rpc.o 00:02:36.692 CC module/bdev/nvme/bdev_mdns_client.o 00:02:36.692 CC module/bdev/nvme/vbdev_opal.o 00:02:36.692 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:36.692 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.692 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:36.692 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.692 CC module/bdev/malloc/bdev_malloc.o 00:02:36.692 CC module/bdev/gpt/gpt.o 00:02:36.692 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.692 CC module/bdev/delay/vbdev_delay.o 00:02:36.692 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.692 CC module/bdev/null/bdev_null.o 00:02:36.692 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.692 CC module/bdev/null/bdev_null_rpc.o 
00:02:36.692 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.692 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:36.692 CC module/bdev/passthru/vbdev_passthru.o 00:02:36.692 CC module/bdev/iscsi/bdev_iscsi.o 00:02:36.692 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:36.692 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:36.692 CC module/bdev/raid/bdev_raid.o 00:02:36.692 CC module/bdev/raid/bdev_raid_rpc.o 00:02:36.692 CC module/bdev/aio/bdev_aio_rpc.o 00:02:36.692 CC module/bdev/aio/bdev_aio.o 00:02:36.692 CC module/bdev/raid/bdev_raid_sb.o 00:02:36.692 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:36.692 CC module/bdev/raid/raid0.o 00:02:36.692 CC module/bdev/raid/raid1.o 00:02:36.692 CC module/bdev/ftl/bdev_ftl.o 00:02:36.692 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:36.693 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:36.693 CC module/bdev/raid/concat.o 00:02:36.693 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:36.693 CC module/bdev/split/vbdev_split.o 00:02:36.693 CC module/bdev/split/vbdev_split_rpc.o 00:02:36.693 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:36.693 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:36.693 LIB libspdk_sock_posix.a 00:02:36.693 SO libspdk_sock_posix.so.5.0 00:02:36.951 SYMLINK libspdk_sock_posix.so 00:02:36.951 LIB libspdk_blobfs_bdev.a 00:02:36.951 SO libspdk_blobfs_bdev.so.5.0 00:02:36.951 LIB libspdk_bdev_null.a 00:02:36.951 LIB libspdk_bdev_passthru.a 00:02:36.951 LIB libspdk_bdev_split.a 00:02:36.951 LIB libspdk_bdev_gpt.a 00:02:36.951 LIB libspdk_bdev_error.a 00:02:36.951 SYMLINK libspdk_blobfs_bdev.so 00:02:37.210 SO libspdk_bdev_split.so.5.0 00:02:37.210 SO libspdk_bdev_null.so.5.0 00:02:37.210 SO libspdk_bdev_passthru.so.5.0 00:02:37.210 SO libspdk_bdev_gpt.so.5.0 00:02:37.210 LIB libspdk_bdev_ftl.a 00:02:37.210 LIB libspdk_bdev_aio.a 00:02:37.210 SO libspdk_bdev_error.so.5.0 00:02:37.210 LIB libspdk_bdev_zone_block.a 00:02:37.210 LIB libspdk_bdev_malloc.a 00:02:37.210 SO libspdk_bdev_aio.so.5.0 00:02:37.210 SO libspdk_bdev_ftl.so.5.0 00:02:37.210 SYMLINK libspdk_bdev_split.so 00:02:37.210 SYMLINK libspdk_bdev_passthru.so 00:02:37.210 LIB libspdk_bdev_delay.a 00:02:37.210 SYMLINK libspdk_bdev_gpt.so 00:02:37.210 SYMLINK libspdk_bdev_null.so 00:02:37.210 LIB libspdk_bdev_iscsi.a 00:02:37.210 SO libspdk_bdev_zone_block.so.5.0 00:02:37.210 SO libspdk_bdev_malloc.so.5.0 00:02:37.210 SYMLINK libspdk_bdev_error.so 00:02:37.210 SO libspdk_bdev_delay.so.5.0 00:02:37.210 SO libspdk_bdev_iscsi.so.5.0 00:02:37.210 SYMLINK libspdk_bdev_aio.so 00:02:37.210 SYMLINK libspdk_bdev_ftl.so 00:02:37.210 SYMLINK libspdk_bdev_zone_block.so 00:02:37.210 SYMLINK libspdk_bdev_malloc.so 00:02:37.210 LIB libspdk_bdev_lvol.a 00:02:37.210 SYMLINK libspdk_bdev_delay.so 00:02:37.210 SYMLINK libspdk_bdev_iscsi.so 00:02:37.210 SO libspdk_bdev_lvol.so.5.0 00:02:37.210 LIB libspdk_bdev_virtio.a 00:02:37.469 SYMLINK libspdk_bdev_lvol.so 00:02:37.469 SO libspdk_bdev_virtio.so.5.0 00:02:37.469 SYMLINK libspdk_bdev_virtio.so 00:02:37.729 LIB libspdk_bdev_raid.a 00:02:37.729 SO libspdk_bdev_raid.so.5.0 00:02:37.729 SYMLINK libspdk_bdev_raid.so 00:02:38.297 LIB libspdk_bdev_nvme.a 00:02:38.297 SO libspdk_bdev_nvme.so.6.0 00:02:38.297 SYMLINK libspdk_bdev_nvme.so 00:02:38.865 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:38.865 CC module/event/subsystems/iobuf/iobuf.o 00:02:38.865 CC module/event/subsystems/vmd/vmd.o 00:02:38.865 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:38.865 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:38.865 CC 
module/event/subsystems/scheduler/scheduler.o 00:02:38.865 CC module/event/subsystems/sock/sock.o 00:02:38.865 LIB libspdk_event_vhost_blk.a 00:02:38.865 LIB libspdk_event_sock.a 00:02:38.865 LIB libspdk_event_scheduler.a 00:02:38.865 SO libspdk_event_vhost_blk.so.2.0 00:02:38.865 LIB libspdk_event_vmd.a 00:02:39.124 LIB libspdk_event_iobuf.a 00:02:39.124 SO libspdk_event_sock.so.4.0 00:02:39.124 SO libspdk_event_scheduler.so.3.0 00:02:39.124 SO libspdk_event_vmd.so.5.0 00:02:39.124 SO libspdk_event_iobuf.so.2.0 00:02:39.124 SYMLINK libspdk_event_vhost_blk.so 00:02:39.124 SYMLINK libspdk_event_sock.so 00:02:39.124 SYMLINK libspdk_event_scheduler.so 00:02:39.124 SYMLINK libspdk_event_vmd.so 00:02:39.124 SYMLINK libspdk_event_iobuf.so 00:02:39.384 CC module/event/subsystems/accel/accel.o 00:02:39.384 LIB libspdk_event_accel.a 00:02:39.644 SO libspdk_event_accel.so.5.0 00:02:39.644 SYMLINK libspdk_event_accel.so 00:02:39.903 CC module/event/subsystems/bdev/bdev.o 00:02:39.903 LIB libspdk_event_bdev.a 00:02:39.903 SO libspdk_event_bdev.so.5.0 00:02:40.163 SYMLINK libspdk_event_bdev.so 00:02:40.163 CC module/event/subsystems/nbd/nbd.o 00:02:40.422 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:40.422 CC module/event/subsystems/scsi/scsi.o 00:02:40.422 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:40.422 CC module/event/subsystems/ublk/ublk.o 00:02:40.422 LIB libspdk_event_nbd.a 00:02:40.422 LIB libspdk_event_ublk.a 00:02:40.422 LIB libspdk_event_scsi.a 00:02:40.422 SO libspdk_event_nbd.so.5.0 00:02:40.422 SO libspdk_event_ublk.so.2.0 00:02:40.422 SO libspdk_event_scsi.so.5.0 00:02:40.422 SYMLINK libspdk_event_nbd.so 00:02:40.422 LIB libspdk_event_nvmf.a 00:02:40.681 SYMLINK libspdk_event_ublk.so 00:02:40.681 SYMLINK libspdk_event_scsi.so 00:02:40.681 SO libspdk_event_nvmf.so.5.0 00:02:40.681 SYMLINK libspdk_event_nvmf.so 00:02:40.681 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:40.681 CC module/event/subsystems/iscsi/iscsi.o 00:02:40.941 LIB libspdk_event_vhost_scsi.a 00:02:40.941 LIB libspdk_event_iscsi.a 00:02:40.941 SO libspdk_event_vhost_scsi.so.2.0 00:02:40.941 SO libspdk_event_iscsi.so.5.0 00:02:40.941 SYMLINK libspdk_event_vhost_scsi.so 00:02:40.941 SYMLINK libspdk_event_iscsi.so 00:02:41.200 SO libspdk.so.5.0 00:02:41.200 SYMLINK libspdk.so 00:02:41.459 CC app/spdk_lspci/spdk_lspci.o 00:02:41.459 CC app/spdk_nvme_perf/perf.o 00:02:41.459 CC app/spdk_top/spdk_top.o 00:02:41.459 CXX app/trace/trace.o 00:02:41.459 CC app/spdk_nvme_discover/discovery_aer.o 00:02:41.459 TEST_HEADER include/spdk/accel.h 00:02:41.459 CC app/trace_record/trace_record.o 00:02:41.459 TEST_HEADER include/spdk/accel_module.h 00:02:41.459 TEST_HEADER include/spdk/assert.h 00:02:41.459 CC app/spdk_nvme_identify/identify.o 00:02:41.459 TEST_HEADER include/spdk/barrier.h 00:02:41.459 TEST_HEADER include/spdk/bdev.h 00:02:41.459 TEST_HEADER include/spdk/bdev_zone.h 00:02:41.459 TEST_HEADER include/spdk/base64.h 00:02:41.459 TEST_HEADER include/spdk/bdev_module.h 00:02:41.459 CC test/rpc_client/rpc_client_test.o 00:02:41.459 TEST_HEADER include/spdk/bit_array.h 00:02:41.459 TEST_HEADER include/spdk/blob_bdev.h 00:02:41.459 TEST_HEADER include/spdk/bit_pool.h 00:02:41.459 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:41.459 TEST_HEADER include/spdk/blobfs.h 00:02:41.459 TEST_HEADER include/spdk/blob.h 00:02:41.459 TEST_HEADER include/spdk/conf.h 00:02:41.459 TEST_HEADER include/spdk/config.h 00:02:41.459 TEST_HEADER include/spdk/cpuset.h 00:02:41.459 TEST_HEADER include/spdk/crc16.h 00:02:41.459 
TEST_HEADER include/spdk/crc32.h 00:02:41.459 TEST_HEADER include/spdk/crc64.h 00:02:41.459 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:41.459 TEST_HEADER include/spdk/dif.h 00:02:41.459 TEST_HEADER include/spdk/dma.h 00:02:41.459 CC app/iscsi_tgt/iscsi_tgt.o 00:02:41.459 TEST_HEADER include/spdk/env_dpdk.h 00:02:41.459 TEST_HEADER include/spdk/endian.h 00:02:41.459 TEST_HEADER include/spdk/env.h 00:02:41.459 TEST_HEADER include/spdk/event.h 00:02:41.459 CC app/spdk_dd/spdk_dd.o 00:02:41.459 TEST_HEADER include/spdk/fd_group.h 00:02:41.459 TEST_HEADER include/spdk/file.h 00:02:41.459 TEST_HEADER include/spdk/fd.h 00:02:41.459 TEST_HEADER include/spdk/ftl.h 00:02:41.459 CC app/nvmf_tgt/nvmf_main.o 00:02:41.459 TEST_HEADER include/spdk/gpt_spec.h 00:02:41.459 TEST_HEADER include/spdk/hexlify.h 00:02:41.459 TEST_HEADER include/spdk/histogram_data.h 00:02:41.459 TEST_HEADER include/spdk/idxd_spec.h 00:02:41.459 CC app/vhost/vhost.o 00:02:41.459 TEST_HEADER include/spdk/idxd.h 00:02:41.459 TEST_HEADER include/spdk/ioat.h 00:02:41.459 TEST_HEADER include/spdk/init.h 00:02:41.459 TEST_HEADER include/spdk/ioat_spec.h 00:02:41.459 TEST_HEADER include/spdk/likely.h 00:02:41.459 TEST_HEADER include/spdk/iscsi_spec.h 00:02:41.459 TEST_HEADER include/spdk/jsonrpc.h 00:02:41.459 TEST_HEADER include/spdk/json.h 00:02:41.459 TEST_HEADER include/spdk/log.h 00:02:41.459 TEST_HEADER include/spdk/nbd.h 00:02:41.459 TEST_HEADER include/spdk/memory.h 00:02:41.459 TEST_HEADER include/spdk/mmio.h 00:02:41.459 TEST_HEADER include/spdk/lvol.h 00:02:41.459 TEST_HEADER include/spdk/notify.h 00:02:41.459 TEST_HEADER include/spdk/nvme_intel.h 00:02:41.459 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:41.459 TEST_HEADER include/spdk/nvme.h 00:02:41.459 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:41.459 TEST_HEADER include/spdk/nvme_spec.h 00:02:41.459 TEST_HEADER include/spdk/nvme_zns.h 00:02:41.459 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:41.459 TEST_HEADER include/spdk/nvmf_spec.h 00:02:41.459 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:41.459 TEST_HEADER include/spdk/nvmf_transport.h 00:02:41.459 TEST_HEADER include/spdk/nvmf.h 00:02:41.459 TEST_HEADER include/spdk/opal.h 00:02:41.459 CC app/spdk_tgt/spdk_tgt.o 00:02:41.459 TEST_HEADER include/spdk/opal_spec.h 00:02:41.459 TEST_HEADER include/spdk/pci_ids.h 00:02:41.459 TEST_HEADER include/spdk/pipe.h 00:02:41.459 TEST_HEADER include/spdk/reduce.h 00:02:41.459 TEST_HEADER include/spdk/queue.h 00:02:41.459 TEST_HEADER include/spdk/scheduler.h 00:02:41.459 TEST_HEADER include/spdk/rpc.h 00:02:41.460 TEST_HEADER include/spdk/scsi.h 00:02:41.460 TEST_HEADER include/spdk/sock.h 00:02:41.460 TEST_HEADER include/spdk/stdinc.h 00:02:41.460 TEST_HEADER include/spdk/scsi_spec.h 00:02:41.460 TEST_HEADER include/spdk/string.h 00:02:41.460 TEST_HEADER include/spdk/thread.h 00:02:41.460 TEST_HEADER include/spdk/trace.h 00:02:41.460 TEST_HEADER include/spdk/trace_parser.h 00:02:41.460 TEST_HEADER include/spdk/tree.h 00:02:41.460 TEST_HEADER include/spdk/ublk.h 00:02:41.460 TEST_HEADER include/spdk/util.h 00:02:41.460 TEST_HEADER include/spdk/uuid.h 00:02:41.460 TEST_HEADER include/spdk/version.h 00:02:41.460 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:41.460 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:41.460 TEST_HEADER include/spdk/vhost.h 00:02:41.460 TEST_HEADER include/spdk/vmd.h 00:02:41.460 TEST_HEADER include/spdk/zipf.h 00:02:41.460 TEST_HEADER include/spdk/xor.h 00:02:41.460 CC examples/nvme/abort/abort.o 00:02:41.460 CC examples/ioat/verify/verify.o 
00:02:41.460 CXX test/cpp_headers/accel.o 00:02:41.460 CXX test/cpp_headers/accel_module.o 00:02:41.460 CXX test/cpp_headers/barrier.o 00:02:41.460 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:41.460 CXX test/cpp_headers/assert.o 00:02:41.460 CXX test/cpp_headers/base64.o 00:02:41.460 CC test/thread/poller_perf/poller_perf.o 00:02:41.460 CXX test/cpp_headers/bdev.o 00:02:41.460 CC examples/sock/hello_world/hello_sock.o 00:02:41.754 CXX test/cpp_headers/bdev_zone.o 00:02:41.754 CXX test/cpp_headers/bdev_module.o 00:02:41.754 CXX test/cpp_headers/bit_pool.o 00:02:41.754 CXX test/cpp_headers/bit_array.o 00:02:41.754 CXX test/cpp_headers/blobfs_bdev.o 00:02:41.754 CXX test/cpp_headers/blob_bdev.o 00:02:41.754 CXX test/cpp_headers/blobfs.o 00:02:41.754 CC examples/util/zipf/zipf.o 00:02:41.754 CXX test/cpp_headers/blob.o 00:02:41.754 CXX test/cpp_headers/conf.o 00:02:41.754 CC examples/vmd/led/led.o 00:02:41.754 CXX test/cpp_headers/crc32.o 00:02:41.754 CXX test/cpp_headers/config.o 00:02:41.754 CXX test/cpp_headers/crc64.o 00:02:41.754 CXX test/cpp_headers/crc16.o 00:02:41.754 CXX test/cpp_headers/cpuset.o 00:02:41.754 CC examples/accel/perf/accel_perf.o 00:02:41.754 CXX test/cpp_headers/dif.o 00:02:41.754 CXX test/cpp_headers/dma.o 00:02:41.754 CC examples/nvme/hello_world/hello_world.o 00:02:41.754 CXX test/cpp_headers/env_dpdk.o 00:02:41.754 CXX test/cpp_headers/endian.o 00:02:41.754 CXX test/cpp_headers/env.o 00:02:41.754 CC examples/ioat/perf/perf.o 00:02:41.754 CC examples/nvme/arbitration/arbitration.o 00:02:41.754 CXX test/cpp_headers/event.o 00:02:41.754 CXX test/cpp_headers/fd_group.o 00:02:41.754 CXX test/cpp_headers/fd.o 00:02:41.754 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:41.754 CXX test/cpp_headers/ftl.o 00:02:41.754 CXX test/cpp_headers/file.o 00:02:41.754 CXX test/cpp_headers/gpt_spec.o 00:02:41.754 CC examples/nvme/hotplug/hotplug.o 00:02:41.754 CC examples/idxd/perf/perf.o 00:02:41.754 CC examples/blob/hello_world/hello_blob.o 00:02:41.754 CXX test/cpp_headers/hexlify.o 00:02:41.754 CXX test/cpp_headers/histogram_data.o 00:02:41.754 CC examples/vmd/lsvmd/lsvmd.o 00:02:41.754 CXX test/cpp_headers/idxd.o 00:02:41.754 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.754 CC examples/nvme/reconnect/reconnect.o 00:02:41.754 CC examples/bdev/hello_world/hello_bdev.o 00:02:41.754 CXX test/cpp_headers/idxd_spec.o 00:02:41.754 CC test/nvme/boot_partition/boot_partition.o 00:02:41.754 CXX test/cpp_headers/init.o 00:02:41.754 CXX test/cpp_headers/ioat.o 00:02:41.754 CC test/nvme/reserve/reserve.o 00:02:41.754 CC test/app/jsoncat/jsoncat.o 00:02:41.754 CC test/nvme/startup/startup.o 00:02:41.754 CC app/fio/nvme/fio_plugin.o 00:02:41.754 CC test/nvme/aer/aer.o 00:02:41.754 CC examples/nvmf/nvmf/nvmf.o 00:02:41.754 CC test/nvme/sgl/sgl.o 00:02:41.754 CC test/nvme/err_injection/err_injection.o 00:02:41.754 CC test/event/reactor/reactor.o 00:02:41.754 CC test/nvme/simple_copy/simple_copy.o 00:02:41.754 CC test/nvme/fused_ordering/fused_ordering.o 00:02:41.754 CC test/env/vtophys/vtophys.o 00:02:41.754 CC test/nvme/connect_stress/connect_stress.o 00:02:41.754 CC test/nvme/overhead/overhead.o 00:02:41.754 CC test/nvme/compliance/nvme_compliance.o 00:02:41.754 CC examples/blob/cli/blobcli.o 00:02:41.754 CC test/nvme/reset/reset.o 00:02:41.754 CC test/nvme/fdp/fdp.o 00:02:41.754 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:41.754 CC test/env/pci/pci_ut.o 00:02:41.754 CC examples/bdev/bdevperf/bdevperf.o 00:02:41.754 CC test/nvme/doorbell_aers/doorbell_aers.o 
00:02:41.754 CC test/env/memory/memory_ut.o 00:02:41.754 CC test/app/histogram_perf/histogram_perf.o 00:02:41.754 CC test/event/event_perf/event_perf.o 00:02:41.754 CC test/event/app_repeat/app_repeat.o 00:02:41.754 CC test/nvme/cuse/cuse.o 00:02:41.754 CC test/dma/test_dma/test_dma.o 00:02:41.754 CC app/fio/bdev/fio_plugin.o 00:02:41.754 CC test/nvme/e2edp/nvme_dp.o 00:02:41.754 CC test/app/stub/stub.o 00:02:41.754 CC test/blobfs/mkfs/mkfs.o 00:02:41.754 CC test/accel/dif/dif.o 00:02:41.754 CC test/bdev/bdevio/bdevio.o 00:02:41.754 CC test/event/reactor_perf/reactor_perf.o 00:02:41.754 CC examples/thread/thread/thread_ex.o 00:02:41.754 CC test/event/scheduler/scheduler.o 00:02:41.754 LINK spdk_lspci 00:02:41.754 CC test/app/bdev_svc/bdev_svc.o 00:02:41.754 CC test/env/mem_callbacks/mem_callbacks.o 00:02:42.064 CC test/lvol/esnap/esnap.o 00:02:42.064 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:42.064 LINK vhost 00:02:42.064 LINK iscsi_tgt 00:02:42.064 LINK rpc_client_test 00:02:42.064 LINK spdk_nvme_discover 00:02:42.064 LINK poller_perf 00:02:42.064 LINK spdk_trace_record 00:02:42.064 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:42.331 LINK pmr_persistence 00:02:42.331 LINK jsoncat 00:02:42.331 LINK reactor 00:02:42.331 LINK boot_partition 00:02:42.331 LINK nvmf_tgt 00:02:42.331 LINK interrupt_tgt 00:02:42.331 LINK err_injection 00:02:42.331 LINK startup 00:02:42.331 LINK ioat_perf 00:02:42.331 LINK stub 00:02:42.331 LINK lsvmd 00:02:42.331 LINK verify 00:02:42.331 LINK hello_sock 00:02:42.331 LINK led 00:02:42.331 CXX test/cpp_headers/ioat_spec.o 00:02:42.331 CXX test/cpp_headers/iscsi_spec.o 00:02:42.331 CXX test/cpp_headers/json.o 00:02:42.331 LINK event_perf 00:02:42.331 CXX test/cpp_headers/jsonrpc.o 00:02:42.331 LINK hotplug 00:02:42.331 CXX test/cpp_headers/likely.o 00:02:42.331 CXX test/cpp_headers/log.o 00:02:42.331 LINK zipf 00:02:42.331 CXX test/cpp_headers/lvol.o 00:02:42.331 CXX test/cpp_headers/mmio.o 00:02:42.331 CXX test/cpp_headers/memory.o 00:02:42.331 CXX test/cpp_headers/nvme.o 00:02:42.331 CXX test/cpp_headers/notify.o 00:02:42.331 CXX test/cpp_headers/nbd.o 00:02:42.331 CXX test/cpp_headers/nvme_intel.o 00:02:42.331 CXX test/cpp_headers/nvme_ocssd.o 00:02:42.331 LINK vtophys 00:02:42.331 LINK histogram_perf 00:02:42.331 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:42.331 LINK spdk_tgt 00:02:42.331 CXX test/cpp_headers/nvme_spec.o 00:02:42.331 LINK spdk_dd 00:02:42.331 LINK app_repeat 00:02:42.331 CXX test/cpp_headers/nvme_zns.o 00:02:42.331 CXX test/cpp_headers/nvmf_cmd.o 00:02:42.331 LINK cmb_copy 00:02:42.331 LINK env_dpdk_post_init 00:02:42.331 LINK fused_ordering 00:02:42.331 LINK reactor_perf 00:02:42.331 LINK aer 00:02:42.331 LINK hello_bdev 00:02:42.331 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:42.331 LINK sgl 00:02:42.331 LINK connect_stress 00:02:42.331 CXX test/cpp_headers/nvmf.o 00:02:42.331 LINK thread 00:02:42.331 CXX test/cpp_headers/nvmf_spec.o 00:02:42.331 CXX test/cpp_headers/nvmf_transport.o 00:02:42.591 LINK doorbell_aers 00:02:42.591 LINK reserve 00:02:42.591 CXX test/cpp_headers/opal.o 00:02:42.591 CXX test/cpp_headers/opal_spec.o 00:02:42.591 LINK mkfs 00:02:42.591 LINK bdev_svc 00:02:42.591 LINK hello_blob 00:02:42.591 LINK arbitration 00:02:42.591 CXX test/cpp_headers/pci_ids.o 00:02:42.591 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:42.591 LINK reset 00:02:42.591 CXX test/cpp_headers/pipe.o 00:02:42.591 LINK simple_copy 00:02:42.591 LINK nvme_compliance 00:02:42.591 CXX test/cpp_headers/queue.o 00:02:42.591 LINK hello_world 
00:02:42.591 CXX test/cpp_headers/reduce.o 00:02:42.591 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:42.591 CXX test/cpp_headers/rpc.o 00:02:42.591 CXX test/cpp_headers/scheduler.o 00:02:42.591 LINK idxd_perf 00:02:42.591 CXX test/cpp_headers/scsi.o 00:02:42.591 CXX test/cpp_headers/scsi_spec.o 00:02:42.591 CXX test/cpp_headers/sock.o 00:02:42.591 CXX test/cpp_headers/stdinc.o 00:02:42.591 CXX test/cpp_headers/string.o 00:02:42.591 CXX test/cpp_headers/thread.o 00:02:42.591 CXX test/cpp_headers/trace.o 00:02:42.591 CXX test/cpp_headers/trace_parser.o 00:02:42.591 LINK reconnect 00:02:42.591 CXX test/cpp_headers/tree.o 00:02:42.591 CXX test/cpp_headers/ublk.o 00:02:42.591 CXX test/cpp_headers/util.o 00:02:42.591 CXX test/cpp_headers/version.o 00:02:42.591 CXX test/cpp_headers/uuid.o 00:02:42.591 CXX test/cpp_headers/vfio_user_pci.o 00:02:42.591 CXX test/cpp_headers/vfio_user_spec.o 00:02:42.591 CXX test/cpp_headers/vhost.o 00:02:42.591 LINK scheduler 00:02:42.591 CXX test/cpp_headers/vmd.o 00:02:42.591 LINK abort 00:02:42.591 CXX test/cpp_headers/xor.o 00:02:42.591 CXX test/cpp_headers/zipf.o 00:02:42.591 LINK nvmf 00:02:42.591 LINK bdevio 00:02:42.591 LINK dif 00:02:42.591 LINK nvme_dp 00:02:42.591 LINK overhead 00:02:42.591 LINK accel_perf 00:02:42.850 LINK fdp 00:02:42.850 LINK spdk_bdev 00:02:42.850 LINK pci_ut 00:02:42.850 LINK blobcli 00:02:42.850 LINK spdk_nvme 00:02:42.850 LINK nvme_fuzz 00:02:42.850 LINK spdk_trace 00:02:42.850 LINK test_dma 00:02:43.108 LINK spdk_nvme_perf 00:02:43.108 LINK spdk_nvme_identify 00:02:43.108 LINK nvme_manage 00:02:43.108 LINK spdk_top 00:02:43.108 LINK mem_callbacks 00:02:43.108 LINK vhost_fuzz 00:02:43.366 LINK bdevperf 00:02:43.366 LINK memory_ut 00:02:43.624 LINK cuse 00:02:44.192 LINK iscsi_fuzz 00:02:47.483 LINK esnap 00:02:47.483 00:02:47.483 real 0m47.426s 00:02:47.483 user 7m45.923s 00:02:47.483 sys 3m55.371s 00:02:47.483 14:48:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:47.483 14:48:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.483 ************************************ 00:02:47.483 END TEST make 00:02:47.483 ************************************ 00:02:47.483 14:48:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:47.483 14:48:06 -- nvmf/common.sh@7 -- # uname -s 00:02:47.483 14:48:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.483 14:48:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.483 14:48:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.483 14:48:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.483 14:48:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.483 14:48:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.483 14:48:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.483 14:48:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.483 14:48:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.483 14:48:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.483 14:48:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:02:47.483 14:48:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:02:47.483 14:48:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.483 14:48:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.483 14:48:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:47.483 14:48:06 -- 
nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:47.483 14:48:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.483 14:48:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.483 14:48:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.483 14:48:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.483 14:48:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.483 14:48:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.483 14:48:06 -- paths/export.sh@5 -- # export PATH 00:02:47.483 14:48:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.483 14:48:06 -- nvmf/common.sh@46 -- # : 0 00:02:47.483 14:48:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:47.483 14:48:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:47.483 14:48:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:47.483 14:48:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.483 14:48:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.483 14:48:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:47.483 14:48:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:47.483 14:48:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:47.747 14:48:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.747 14:48:06 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.747 14:48:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.747 14:48:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.747 14:48:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:47.747 14:48:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.747 14:48:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:47.747 14:48:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.747 14:48:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.747 14:48:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.747 14:48:06 -- spdk/autotest.sh@48 -- # udevadm_pid=3018296 00:02:47.747 14:48:06 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:47.747 14:48:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.747 14:48:06 -- 
spdk/autotest.sh@54 -- # echo 3018298 00:02:47.747 14:48:06 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:47.747 14:48:06 -- spdk/autotest.sh@56 -- # echo 3018299 00:02:47.747 14:48:06 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:47.747 14:48:06 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:47.747 14:48:06 -- spdk/autotest.sh@60 -- # echo 3018300 00:02:47.747 14:48:06 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:47.747 14:48:06 -- spdk/autotest.sh@62 -- # echo 3018301 00:02:47.748 14:48:06 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:47.748 14:48:06 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:47.748 14:48:06 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:47.748 14:48:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:47.748 14:48:06 -- common/autotest_common.sh@10 -- # set +x 00:02:47.748 14:48:06 -- spdk/autotest.sh@70 -- # create_test_list 00:02:47.748 14:48:06 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:47.748 14:48:06 -- common/autotest_common.sh@10 -- # set +x 00:02:47.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:47.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:47.748 14:48:06 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:47.748 14:48:06 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.748 14:48:06 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.748 14:48:06 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:47.748 14:48:06 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.748 14:48:06 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:47.748 14:48:06 -- common/autotest_common.sh@1440 -- # uname 00:02:47.748 14:48:06 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:47.748 14:48:06 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:47.748 14:48:06 -- common/autotest_common.sh@1460 -- # uname 00:02:47.748 14:48:06 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:47.748 14:48:06 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:47.748 14:48:06 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:47.748 14:48:06 -- spdk/autotest.sh@83 -- # hash lcov 00:02:47.748 14:48:06 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:47.748 14:48:06 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:47.748 --rc lcov_branch_coverage=1 00:02:47.748 --rc lcov_function_coverage=1 00:02:47.748 --rc genhtml_branch_coverage=1 00:02:47.748 --rc genhtml_function_coverage=1 00:02:47.748 --rc genhtml_legend=1 00:02:47.748 --rc geninfo_all_blocks=1 00:02:47.748 ' 00:02:47.748 14:48:06 -- 
spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:47.748 --rc lcov_branch_coverage=1 00:02:47.748 --rc lcov_function_coverage=1 00:02:47.748 --rc genhtml_branch_coverage=1 00:02:47.748 --rc genhtml_function_coverage=1 00:02:47.748 --rc genhtml_legend=1 00:02:47.748 --rc geninfo_all_blocks=1 00:02:47.748 ' 00:02:47.748 14:48:06 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:47.748 --rc lcov_branch_coverage=1 00:02:47.748 --rc lcov_function_coverage=1 00:02:47.748 --rc genhtml_branch_coverage=1 00:02:47.748 --rc genhtml_function_coverage=1 00:02:47.748 --rc genhtml_legend=1 00:02:47.748 --rc geninfo_all_blocks=1 00:02:47.748 --no-external' 00:02:47.748 14:48:06 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:47.748 --rc lcov_branch_coverage=1 00:02:47.748 --rc lcov_function_coverage=1 00:02:47.748 --rc genhtml_branch_coverage=1 00:02:47.748 --rc genhtml_function_coverage=1 00:02:47.748 --rc genhtml_legend=1 00:02:47.748 --rc geninfo_all_blocks=1 00:02:47.748 --no-external' 00:02:47.748 14:48:06 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:47.748 lcov: LCOV version 1.14 00:02:47.748 14:48:06 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:02.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:02.631 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:02.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:02.631 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:02.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:02.631 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:17.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:17.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:17.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:17.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:17.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:17.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:17.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:17.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:17.512 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/*.gcno: no functions found 00:03:17.512 geninfo: WARNING: GCOV did not produce any data for the cpp_headers test objects under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ (one "<header>.gcno:no functions found" / "geninfo: WARNING: GCOV did not produce any data for <header>.gcno" pair was emitted for each of: accel_module, assert, blobfs_bdev, bdev, bit_array, blobfs, blob_bdev, bdev_module, bit_pool, conf, blob, crc64, crc32, cpuset, crc16, dif, env_dpdk, env, config, ftl, dma, fd_group, fd, endian, event, file, gpt_spec, hexlify, idxd_spec, idxd, histogram_data, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, likely, lvol, log, mmio, memory, nvme, nvme_intel, notify, nbd, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvmf_cmd, nvme_zns, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal_spec, opal, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, stdinc, scsi_spec, sock, string, thread, trace, version, vfio_user_pci, util, ublk, vfio_user_spec, tree, uuid, trace_parser, vmd, vhost, zipf, xor)
00:03:20.050 14:48:38 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:20.050 14:48:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:20.050 14:48:38 -- common/autotest_common.sh@10 -- # set +x 00:03:20.050
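These warnings come from geninfo reading gcov-instrumented objects: the cpp_headers targets only compile each public header, so their .gcno files contain no executed functions and the warnings are expected. As a rough illustration only (not the exact commands the autotest scripts run; paths and output names below are placeholders), a coverage pass over such a build tree typically captures the data with lcov and filters the header-only objects out of the report:

  # capture coverage data from a gcov-instrumented build tree (path is a placeholder)
  lcov --capture --directory /path/to/spdk --output-file coverage.info
  # drop the compile-only header objects that legitimately contain no functions
  lcov --remove coverage.info '*/test/cpp_headers/*' --output-file coverage.trimmed.info
  # render an HTML summary from the trimmed tracefile
  genhtml coverage.trimmed.info --output-directory coverage_html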
14:48:38 -- spdk/autotest.sh@102 -- # rm -f 00:03:20.050 14:48:38 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.592 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:03:22.592 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:22.592 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:22.592 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:22.592 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:22.592 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:22.592 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:22.592 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:22.851 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:22.851 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:22.851 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:22.851 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:22.851 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:22.851 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:22.851 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:22.851 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:22.851 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:22.851 14:48:41 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:22.851 14:48:41 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:22.851 14:48:41 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:22.851 14:48:41 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:22.851 14:48:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:22.851 14:48:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:22.851 14:48:41 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:22.851 14:48:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:22.851 14:48:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:22.851 14:48:41 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:22.851 14:48:41 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:22.851 14:48:41 -- spdk/autotest.sh@121 -- # grep -v p 00:03:22.851 14:48:41 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:22.852 14:48:41 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:22.852 14:48:41 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:22.852 14:48:41 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:22.852 14:48:41 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:23.111 No valid GPT data, bailing 00:03:23.111 14:48:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:23.111 14:48:41 -- scripts/common.sh@393 -- # pt= 00:03:23.111 14:48:41 -- scripts/common.sh@394 -- # return 1 00:03:23.111 14:48:41 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:23.111 1+0 records in 00:03:23.111 1+0 records out 00:03:23.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00244077 s, 430 MB/s 00:03:23.111 14:48:41 -- spdk/autotest.sh@129 -- # sync 00:03:23.111 14:48:41 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:23.111 14:48:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:23.111 14:48:41 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:03:29.682 14:48:47 -- spdk/autotest.sh@135 -- # uname -s 00:03:29.682 14:48:47 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:29.682 14:48:47 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:29.682 14:48:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.682 14:48:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.682 14:48:47 -- common/autotest_common.sh@10 -- # set +x 00:03:29.682 ************************************ 00:03:29.682 START TEST setup.sh 00:03:29.682 ************************************ 00:03:29.682 14:48:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:29.682 * Looking for test storage... 00:03:29.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:29.682 14:48:47 -- setup/test-setup.sh@10 -- # uname -s 00:03:29.682 14:48:47 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:29.682 14:48:47 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:29.682 14:48:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.682 14:48:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.682 14:48:47 -- common/autotest_common.sh@10 -- # set +x 00:03:29.682 ************************************ 00:03:29.682 START TEST acl 00:03:29.682 ************************************ 00:03:29.682 14:48:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:29.682 * Looking for test storage... 00:03:29.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:29.682 14:48:47 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:29.682 14:48:47 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:29.682 14:48:47 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:29.682 14:48:47 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:29.682 14:48:47 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.682 14:48:47 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:29.682 14:48:47 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:29.682 14:48:47 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:29.682 14:48:47 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.682 14:48:47 -- setup/acl.sh@12 -- # devs=() 00:03:29.682 14:48:47 -- setup/acl.sh@12 -- # declare -a devs 00:03:29.682 14:48:47 -- setup/acl.sh@13 -- # drivers=() 00:03:29.682 14:48:47 -- setup/acl.sh@13 -- # declare -A drivers 00:03:29.682 14:48:47 -- setup/acl.sh@51 -- # setup reset 00:03:29.682 14:48:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.682 14:48:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.217 14:48:51 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:32.217 14:48:51 -- setup/acl.sh@16 -- # local dev driver 00:03:32.217 14:48:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.217 14:48:51 -- setup/acl.sh@15 -- # setup output status 00:03:32.217 14:48:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.217 14:48:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:35.507 Hugepages 00:03:35.507 node hugesize free 
/ total 00:03:35.507 14:48:53 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:35.507 14:48:53 -- setup/acl.sh@19 -- # continue 00:03:35.507 14:48:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:53 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:35.507 14:48:53 -- setup/acl.sh@19 -- # continue 00:03:35.507 14:48:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:53 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:35.507 14:48:53 -- setup/acl.sh@19 -- # continue 00:03:35.507 14:48:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 00:03:35.507 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:35.507 14:48:53 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:35.507 14:48:53 -- setup/acl.sh@19 -- # continue 00:03:35.507 14:48:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # 
continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # continue 00:03:35.507 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.507 14:48:54 -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:35.507 14:48:54 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:03:35.507 14:48:54 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:35.508 14:48:54 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:35.508 14:48:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.508 14:48:54 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:35.508 14:48:54 -- setup/acl.sh@54 -- # run_test denied denied 00:03:35.508 14:48:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.508 14:48:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.508 14:48:54 -- common/autotest_common.sh@10 -- # set +x 00:03:35.508 ************************************ 00:03:35.508 START TEST denied 00:03:35.508 ************************************ 00:03:35.508 14:48:54 -- common/autotest_common.sh@1104 -- # denied 00:03:35.508 14:48:54 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:03:35.508 14:48:54 -- setup/acl.sh@38 -- # setup output config 00:03:35.508 14:48:54 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:03:35.508 14:48:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.508 14:48:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.797 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:03:38.797 14:48:57 -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:03:38.797 14:48:57 -- setup/acl.sh@28 -- # local dev driver 00:03:38.797 14:48:57 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:38.797 14:48:57 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:03:38.797 14:48:57 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:86:00.0/driver 00:03:38.797 14:48:57 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:38.797 14:48:57 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:38.797 14:48:57 -- setup/acl.sh@41 -- # setup reset 00:03:38.797 14:48:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.797 14:48:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.093 00:03:44.093 real 0m7.894s 00:03:44.093 user 0m2.556s 00:03:44.093 sys 0m4.626s 00:03:44.093 14:49:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.093 14:49:02 -- common/autotest_common.sh@10 -- # set +x 00:03:44.093 ************************************ 00:03:44.093 END TEST denied 00:03:44.093 ************************************ 00:03:44.093 14:49:02 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:44.093 14:49:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.093 14:49:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.093 14:49:02 -- common/autotest_common.sh@10 -- # set +x 00:03:44.093 ************************************ 00:03:44.093 START TEST allowed 00:03:44.093 ************************************ 00:03:44.093 14:49:02 -- common/autotest_common.sh@1104 -- # allowed 00:03:44.093 14:49:02 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0 00:03:44.093 14:49:02 -- setup/acl.sh@45 -- # setup output config 00:03:44.093 14:49:02 -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*' 00:03:44.093 14:49:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.093 14:49:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.455 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.455 14:49:06 -- setup/acl.sh@47 -- # verify 00:03:48.455 14:49:06 -- setup/acl.sh@28 -- # local dev driver 00:03:48.455 14:49:06 -- setup/acl.sh@48 -- # setup reset 00:03:48.455 14:49:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.455 14:49:06 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.739 00:03:51.739 real 0m7.953s 00:03:51.739 user 0m2.534s 00:03:51.739 sys 0m4.513s 00:03:51.739 14:49:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.739 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:03:51.739 ************************************ 00:03:51.739 END TEST allowed 00:03:51.739 ************************************ 00:03:51.739 00:03:51.739 real 0m22.634s 00:03:51.739 user 0m7.610s 00:03:51.739 sys 0m13.635s 00:03:51.739 14:49:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.739 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:03:51.739 ************************************ 00:03:51.739 END TEST acl 00:03:51.739 ************************************ 00:03:51.739 14:49:10 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:51.739 14:49:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.739 14:49:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.739 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:03:51.739 ************************************ 00:03:51.739 START TEST hugepages 00:03:51.739 ************************************ 00:03:51.739 14:49:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:51.739 * Looking for test storage... 
00:03:51.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.739 14:49:10 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:51.739 14:49:10 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:51.739 14:49:10 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:51.739 14:49:10 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:51.739 14:49:10 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:51.739 14:49:10 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:51.740 14:49:10 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:51.740 14:49:10 -- setup/common.sh@18 -- # local node= 00:03:51.740 14:49:10 -- setup/common.sh@19 -- # local var val 00:03:51.740 14:49:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.740 14:49:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.740 14:49:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.740 14:49:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.740 14:49:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.740 14:49:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.740 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.740 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.740 14:49:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 68929676 kB' 'MemAvailable: 72929616 kB' 'Buffers: 2704 kB' 'Cached: 14845664 kB' 'SwapCached: 0 kB' 'Active: 11668360 kB' 'Inactive: 3781800 kB' 'Active(anon): 11216776 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605308 kB' 'Mapped: 189812 kB' 'Shmem: 10614984 kB' 'KReclaimable: 602516 kB' 'Slab: 1291164 kB' 'SReclaimable: 602516 kB' 'SUnreclaim: 688648 kB' 'KernelStack: 22496 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434728 kB' 'Committed_AS: 12694928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221592 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:03:51.740 14:49:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.740 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.740 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.740 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.740 14:49:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.740 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.740 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.740 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.740 14:49:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.740 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.740 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.740 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.740 14:49:10 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.740 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.740 [... the get_meminfo loop repeats the identical IFS=': ' / read -r var val _ / [[ <field> == Hugepagesize ]] / continue trace for each remaining /proc/meminfo field listed in the mapfile output above ...] 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': '
00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # continue 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.741 14:49:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.741 14:49:10 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.741 14:49:10 -- setup/common.sh@33 -- # echo 2048 00:03:51.741 14:49:10 -- setup/common.sh@33 -- # return 0 00:03:51.741 14:49:10 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:51.741 14:49:10 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:51.741 14:49:10 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:51.741 14:49:10 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:51.741 14:49:10 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:51.741 14:49:10 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:51.741 14:49:10 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:51.741 14:49:10 -- setup/hugepages.sh@207 -- # get_nodes 00:03:51.741 14:49:10 -- setup/hugepages.sh@27 -- # local node 00:03:51.741 14:49:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.741 14:49:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:51.741 14:49:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.741 14:49:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:51.741 14:49:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.741 14:49:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.741 14:49:10 -- setup/hugepages.sh@208 -- # clear_hp 00:03:51.741 14:49:10 -- setup/hugepages.sh@37 -- # local node hp 00:03:51.741 14:49:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:51.741 14:49:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.741 14:49:10 -- setup/hugepages.sh@41 -- # echo 0 00:03:51.741 14:49:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.741 14:49:10 -- setup/hugepages.sh@41 -- # echo 0 00:03:51.741 14:49:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:51.741 14:49:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.741 14:49:10 -- setup/hugepages.sh@41 -- # echo 0 00:03:51.741 14:49:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.741 14:49:10 -- setup/hugepages.sh@41 -- # echo 0 00:03:51.741 14:49:10 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:51.741 14:49:10 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:51.741 14:49:10 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:51.741 14:49:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.741 14:49:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.741 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:03:51.741 ************************************ 00:03:51.741 START TEST default_setup 00:03:51.741 ************************************ 00:03:51.741 14:49:10 -- common/autotest_common.sh@1104 -- # default_setup 00:03:51.741 14:49:10 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:51.741 14:49:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.741 14:49:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:51.741 14:49:10 -- setup/hugepages.sh@51 -- # shift 00:03:51.741 14:49:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:51.741 14:49:10 -- setup/hugepages.sh@52 -- # local node_ids 00:03:51.741 14:49:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.741 14:49:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.741 14:49:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:51.741 14:49:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:51.741 14:49:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.741 14:49:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.741 14:49:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.741 14:49:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.741 14:49:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.741 14:49:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
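The hugepages trace above first clears any pre-existing reservations (clear_hp) and then computes the allocation for TEST default_setup: 1024 pages of the default 2048kB size on node 0. Reduced to its essentials, that flow walks each NUMA node's hugepage pools in sysfs, zeroes them, and later writes the requested count; a hedged sketch under those assumptions (the node glob and the use of sudo/tee are illustrative, not the script's exact code):

  # zero every hugepage pool (2048kB and 1048576kB) on every NUMA node
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
          echo 0 | sudo tee "$hp" > /dev/null
      done
  done
  export CLEAR_HUGE=yes
  # the default_setup test then asks for 1024 x 2048kB pages via the global knob
  echo 1024 | sudo tee /proc/sys/vm/nr_hugepages > /dev/null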
00:03:51.741 14:49:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.741 14:49:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:51.741 14:49:10 -- setup/hugepages.sh@73 -- # return 0 00:03:51.741 14:49:10 -- setup/hugepages.sh@137 -- # setup output 00:03:51.741 14:49:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.741 14:49:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.031 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:55.031 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:55.031 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:55.031 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:55.031 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:55.031 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:55.032 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:55.603 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.603 14:49:14 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:55.603 14:49:14 -- setup/hugepages.sh@89 -- # local node 00:03:55.603 14:49:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.603 14:49:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.603 14:49:14 -- setup/hugepages.sh@92 -- # local surp 00:03:55.603 14:49:14 -- setup/hugepages.sh@93 -- # local resv 00:03:55.603 14:49:14 -- setup/hugepages.sh@94 -- # local anon 00:03:55.603 14:49:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.603 14:49:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.604 14:49:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.604 14:49:14 -- setup/common.sh@18 -- # local node= 00:03:55.604 14:49:14 -- setup/common.sh@19 -- # local var val 00:03:55.604 14:49:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.604 14:49:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.604 14:49:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.604 14:49:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.604 14:49:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.604 14:49:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71078264 kB' 'MemAvailable: 75078172 kB' 'Buffers: 2704 kB' 'Cached: 14845764 kB' 'SwapCached: 0 kB' 'Active: 11687616 kB' 'Inactive: 3781800 kB' 'Active(anon): 11236032 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624400 kB' 'Mapped: 190196 kB' 'Shmem: 10615084 kB' 'KReclaimable: 602484 kB' 'Slab: 1290524 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 688040 kB' 'KernelStack: 22800 
kB' 'PageTables: 9756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12717748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221736 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 
14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.604 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.604 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.605 14:49:14 -- setup/common.sh@33 -- # echo 0 00:03:55.605 14:49:14 -- setup/common.sh@33 -- # return 0 00:03:55.605 14:49:14 -- setup/hugepages.sh@97 -- # anon=0 00:03:55.605 14:49:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.605 14:49:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.605 14:49:14 -- setup/common.sh@18 -- # local node= 00:03:55.605 14:49:14 -- setup/common.sh@19 -- # local var val 00:03:55.605 14:49:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.605 14:49:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.605 14:49:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.605 14:49:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.605 14:49:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.605 14:49:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71081852 kB' 'MemAvailable: 75081760 kB' 'Buffers: 2704 kB' 'Cached: 14845764 kB' 'SwapCached: 0 kB' 'Active: 11688784 kB' 'Inactive: 3781800 kB' 'Active(anon): 11237200 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625544 kB' 'Mapped: 190196 kB' 'Shmem: 10615084 kB' 'KReclaimable: 602484 kB' 'Slab: 1290484 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 688000 kB' 'KernelStack: 22768 kB' 'PageTables: 9796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12719344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221768 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 
kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- 
setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.605 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.605 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.606 14:49:14 -- setup/common.sh@33 -- # echo 0 00:03:55.606 14:49:14 -- setup/common.sh@33 -- # return 0 00:03:55.606 14:49:14 -- setup/hugepages.sh@99 -- # surp=0 00:03:55.606 14:49:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.606 14:49:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.606 14:49:14 -- setup/common.sh@18 -- # local node= 00:03:55.606 14:49:14 -- setup/common.sh@19 -- # local var val 00:03:55.606 14:49:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.606 14:49:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.606 14:49:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.606 14:49:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.606 14:49:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.606 14:49:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71078380 kB' 'MemAvailable: 75078288 kB' 'Buffers: 2704 kB' 'Cached: 14845780 kB' 'SwapCached: 0 kB' 'Active: 11690760 kB' 'Inactive: 3781800 kB' 'Active(anon): 11239176 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627472 kB' 'Mapped: 190516 kB' 'Shmem: 10615100 kB' 'KReclaimable: 602484 kB' 'Slab: 1290484 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 688000 kB' 'KernelStack: 22688 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12722276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221740 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.606 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.606 14:49:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 
00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.607 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.607 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.608 14:49:14 -- setup/common.sh@33 -- # echo 0 00:03:55.608 14:49:14 -- setup/common.sh@33 -- # return 0 00:03:55.608 14:49:14 -- setup/hugepages.sh@100 -- # resv=0 00:03:55.608 14:49:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.608 nr_hugepages=1024 00:03:55.608 14:49:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.608 resv_hugepages=0 00:03:55.608 14:49:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.608 surplus_hugepages=0 00:03:55.608 14:49:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.608 anon_hugepages=0 00:03:55.608 14:49:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.608 14:49:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.608 14:49:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.608 14:49:14 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:55.608 14:49:14 -- setup/common.sh@18 -- # local node= 00:03:55.608 14:49:14 -- setup/common.sh@19 -- # local var val 00:03:55.608 14:49:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.608 14:49:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.608 14:49:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.608 14:49:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.608 14:49:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.608 14:49:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71084504 kB' 'MemAvailable: 75084412 kB' 'Buffers: 2704 kB' 'Cached: 14845780 kB' 'SwapCached: 0 kB' 'Active: 11685608 kB' 'Inactive: 3781800 kB' 'Active(anon): 11234024 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622324 kB' 'Mapped: 190012 kB' 'Shmem: 10615100 kB' 'KReclaimable: 602484 kB' 'Slab: 1290600 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 688116 kB' 'KernelStack: 22704 kB' 'PageTables: 9756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12716324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221752 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.608 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.608 14:49:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # 
continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.609 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.609 14:49:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.609 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.869 14:49:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.869 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.869 14:49:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.869 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.869 14:49:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.869 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.869 14:49:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.869 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.869 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.869 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.869 14:49:14 -- setup/common.sh@33 -- # echo 1024 00:03:55.869 14:49:14 -- setup/common.sh@33 -- # return 0 00:03:55.870 14:49:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.870 14:49:14 -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.870 14:49:14 -- setup/hugepages.sh@27 -- # local node 00:03:55.870 14:49:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.870 14:49:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.870 14:49:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.870 14:49:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:55.870 14:49:14 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.870 14:49:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.870 14:49:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.870 14:49:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.870 14:49:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.870 14:49:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.870 14:49:14 -- setup/common.sh@18 -- # local node=0 00:03:55.870 14:49:14 -- setup/common.sh@19 -- # local var val 00:03:55.870 14:49:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.870 14:49:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.870 14:49:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.870 14:49:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.870 14:49:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.870 14:49:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 35456976 kB' 'MemUsed: 12611420 kB' 'SwapCached: 0 
kB' 'Active: 8428816 kB' 'Inactive: 288032 kB' 'Active(anon): 8258028 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 288032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8451164 kB' 'Mapped: 41728 kB' 'AnonPages: 268888 kB' 'Shmem: 7992344 kB' 'KernelStack: 11944 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322192 kB' 'Slab: 667652 kB' 'SReclaimable: 322192 kB' 'SUnreclaim: 345460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.870 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.870 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # continue 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.871 14:49:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.871 14:49:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.871 14:49:14 -- setup/common.sh@33 -- # echo 0 00:03:55.871 14:49:14 -- setup/common.sh@33 -- # return 0 00:03:55.871 14:49:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.871 14:49:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.871 14:49:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.871 14:49:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.871 14:49:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.871 node0=1024 expecting 1024 00:03:55.871 14:49:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.871 00:03:55.871 real 0m4.209s 00:03:55.871 user 0m1.242s 00:03:55.871 sys 0m2.163s 00:03:55.871 14:49:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.871 14:49:14 -- common/autotest_common.sh@10 -- # set +x 00:03:55.871 ************************************ 00:03:55.871 END TEST default_setup 00:03:55.871 ************************************ 00:03:55.871 14:49:14 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:55.871 14:49:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:55.871 14:49:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:55.872 14:49:14 -- common/autotest_common.sh@10 -- # set +x 00:03:55.872 ************************************ 00:03:55.872 START TEST per_node_1G_alloc 00:03:55.872 ************************************ 00:03:55.872 14:49:14 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:55.872 14:49:14 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:55.872 14:49:14 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:55.872 14:49:14 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.872 14:49:14 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:55.872 14:49:14 -- setup/hugepages.sh@51 -- # shift 00:03:55.872 14:49:14 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:55.872 14:49:14 -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.872 14:49:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.872 14:49:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.872 14:49:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:55.872 14:49:14 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:55.872 14:49:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.872 14:49:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.872 14:49:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.872 14:49:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.872 14:49:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.872 14:49:14 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:55.872 14:49:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.872 14:49:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:55.872 14:49:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.872 14:49:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:55.872 14:49:14 -- setup/hugepages.sh@73 -- # return 0 00:03:55.872 14:49:14 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:55.872 
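The per_node_1G_alloc test that starts here sizes its request with get_test_nr_hugepages 1048576 0 1: 1 GiB expressed in kB, spread over nodes 0 and 1, which at the default 2048 kB hugepage size works out to the 512 pages per node seen in the trace (NRHUGE=512, HUGENODE=0,1). The arithmetic as a standalone check; the values are taken from the traced run, and the 2048 kB Hugepagesize assumption matches what the meminfo dumps below report:

    size_kb=1048576                                  # the 1G the test requests, in kB
    hugepagesize_kb=2048                             # default hugepage size on this machine
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1048576 / 2048 = 512
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"         # 512 pages asked for on each of node 0 and 1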
14:49:14 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:55.872 14:49:14 -- setup/hugepages.sh@146 -- # setup output 00:03:55.872 14:49:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.872 14:49:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.163 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:59.163 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:59.163 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:59.163 14:49:17 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:59.163 14:49:17 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:59.163 14:49:17 -- setup/hugepages.sh@89 -- # local node 00:03:59.163 14:49:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.163 14:49:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.163 14:49:17 -- setup/hugepages.sh@92 -- # local surp 00:03:59.163 14:49:17 -- setup/hugepages.sh@93 -- # local resv 00:03:59.163 14:49:17 -- setup/hugepages.sh@94 -- # local anon 00:03:59.163 14:49:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.163 14:49:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.163 14:49:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.163 14:49:17 -- setup/common.sh@18 -- # local node= 00:03:59.163 14:49:17 -- setup/common.sh@19 -- # local var val 00:03:59.163 14:49:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.163 14:49:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.163 14:49:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.163 14:49:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.163 14:49:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.163 14:49:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.163 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.163 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.163 14:49:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71073316 kB' 'MemAvailable: 75073224 kB' 'Buffers: 2704 kB' 'Cached: 14846028 kB' 'SwapCached: 0 kB' 'Active: 11686496 kB' 'Inactive: 3781800 kB' 'Active(anon): 11234912 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622732 kB' 'Mapped: 189976 
kB' 'Shmem: 10615348 kB' 'KReclaimable: 602484 kB' 'Slab: 1290948 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 688464 kB' 'KernelStack: 22768 kB' 'PageTables: 9872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12715272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221848 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:03:59.163 14:49:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
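The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test a few entries back is verify_nr_hugepages checking which transparent-hugepage mode is bracketed in sysfs; only when "[never]" is not the selected mode does it go on to read AnonHugePages, which is the field the loop above is scanning for. A sketch of that gate, assuming the usual sysfs/procfs paths:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "anon_hugepages=${anon} kB"                      # 0 in the run traced here
    fi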
00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.164 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.164 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 14:49:17 -- setup/common.sh@33 -- # echo 0 00:03:59.165 14:49:17 -- setup/common.sh@33 -- # return 0 00:03:59.165 14:49:17 -- setup/hugepages.sh@97 -- # anon=0 00:03:59.165 14:49:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.165 14:49:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.165 14:49:17 -- setup/common.sh@18 -- # local node= 00:03:59.165 14:49:17 -- setup/common.sh@19 -- # local var val 00:03:59.165 14:49:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.165 14:49:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.165 14:49:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.165 14:49:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.165 14:49:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.165 14:49:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71072816 kB' 'MemAvailable: 75072724 kB' 'Buffers: 2704 kB' 'Cached: 14846028 kB' 'SwapCached: 0 kB' 'Active: 11687924 kB' 'Inactive: 3781800 kB' 'Active(anon): 11236340 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623776 kB' 'Mapped: 190052 kB' 'Shmem: 10615348 kB' 'KReclaimable: 602484 kB' 'Slab: 1291036 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 688552 kB' 'KernelStack: 22960 kB' 'PageTables: 10388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12716800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221832 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 
14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.165 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.166 14:49:17 -- setup/common.sh@33 -- # echo 0 00:03:59.166 14:49:17 -- setup/common.sh@33 -- # return 0 00:03:59.166 14:49:17 -- setup/hugepages.sh@99 -- # surp=0 00:03:59.166 14:49:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.166 14:49:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.166 14:49:17 -- setup/common.sh@18 -- # local node= 00:03:59.166 14:49:17 -- setup/common.sh@19 -- # local var val 00:03:59.166 14:49:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.166 14:49:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.166 14:49:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.166 14:49:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.166 14:49:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.166 14:49:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71074748 kB' 'MemAvailable: 75074656 kB' 'Buffers: 2704 kB' 'Cached: 14846040 kB' 'SwapCached: 0 kB' 'Active: 11686112 kB' 'Inactive: 3781800 kB' 'Active(anon): 11234528 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621888 kB' 'Mapped: 189932 kB' 'Shmem: 10615360 kB' 'KReclaimable: 602484 kB' 'Slab: 1290996 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 688512 kB' 'KernelStack: 22608 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12713152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221752 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.166 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 
00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.167 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.167 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.168 14:49:17 -- setup/common.sh@33 -- # echo 0 00:03:59.168 14:49:17 -- setup/common.sh@33 -- # return 0 00:03:59.168 14:49:17 -- setup/hugepages.sh@100 -- # resv=0 00:03:59.168 14:49:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.168 nr_hugepages=1024 00:03:59.168 14:49:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.168 resv_hugepages=0 00:03:59.168 14:49:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.168 surplus_hugepages=0 00:03:59.168 14:49:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.168 anon_hugepages=0 00:03:59.168 14:49:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.168 14:49:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
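The trace above is setup/common.sh's get_meminfo loop walking /proc/meminfo one field at a time: split each line on ': ', skip every field that is not the one requested (HugePages_Rsvd here), and echo the matching value, which hugepages.sh then folds into its accounting check. A minimal stand-alone sketch of that pattern, using hypothetical helper names rather than the real setup scripts, might look like:

#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo lookup and hugepage accounting check
# seen in the trace above. Helper names are illustrative, not SPDK's own.

get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Same idea as common.sh@31/@32: skip until the requested field.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

requested=1024                               # pages the test asked for (NRHUGE)
total=$(get_meminfo_field HugePages_Total)
resv=$(get_meminfo_field HugePages_Rsvd)
surp=$(get_meminfo_field HugePages_Surp)

# Same shape as the hugepages.sh@107/@110 checks: the kernel's total should
# line up with the requested pages plus surplus and reserved
# (1024 == 1024 + 0 + 0 on this box).
(( total == requested + surp + resv )) && echo "hugepage accounting OK"

Run on the node above it should print the OK line, since the trace reports HugePages_Total: 1024 with no reserved or surplus pages.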
00:03:59.168 14:49:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.168 14:49:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.168 14:49:17 -- setup/common.sh@18 -- # local node= 00:03:59.168 14:49:17 -- setup/common.sh@19 -- # local var val 00:03:59.168 14:49:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.168 14:49:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.168 14:49:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.168 14:49:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.168 14:49:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.168 14:49:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71075764 kB' 'MemAvailable: 75075672 kB' 'Buffers: 2704 kB' 'Cached: 14846056 kB' 'SwapCached: 0 kB' 'Active: 11687128 kB' 'Inactive: 3781800 kB' 'Active(anon): 11235544 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623392 kB' 'Mapped: 189932 kB' 'Shmem: 10615376 kB' 'KReclaimable: 602484 kB' 'Slab: 1290828 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 688344 kB' 'KernelStack: 22448 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12705488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221672 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 
-- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.168 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.168 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- 
setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.169 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.169 14:49:17 -- setup/common.sh@33 -- # echo 1024 00:03:59.169 14:49:17 -- setup/common.sh@33 -- # return 0 00:03:59.169 14:49:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.169 14:49:17 -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.169 14:49:17 -- setup/hugepages.sh@27 -- # local node 00:03:59.169 14:49:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.169 14:49:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.169 14:49:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.169 14:49:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.169 14:49:17 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.169 14:49:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.169 14:49:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.169 14:49:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.169 14:49:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.169 14:49:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.169 14:49:17 -- setup/common.sh@18 -- # local node=0 00:03:59.169 14:49:17 -- setup/common.sh@19 -- # local var val 00:03:59.169 14:49:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.169 14:49:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.169 14:49:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.169 14:49:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.169 14:49:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.169 14:49:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.169 14:49:17 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:59.170 14:49:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 36513364 kB' 'MemUsed: 11555032 kB' 'SwapCached: 0 kB' 'Active: 8433208 kB' 'Inactive: 288032 kB' 'Active(anon): 8262420 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 288032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8451264 kB' 'Mapped: 42224 kB' 'AnonPages: 273200 kB' 'Shmem: 7992444 kB' 'KernelStack: 11976 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322192 kB' 'Slab: 667776 kB' 'SReclaimable: 322192 kB' 'SUnreclaim: 345584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # 
continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 
14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.170 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.170 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.170 14:49:17 -- setup/common.sh@33 -- # echo 0 00:03:59.170 14:49:17 -- setup/common.sh@33 -- # return 0 00:03:59.170 14:49:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.170 14:49:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.170 14:49:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.170 14:49:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.170 14:49:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.170 14:49:17 -- setup/common.sh@18 -- # local node=1 00:03:59.170 14:49:17 -- setup/common.sh@19 -- # local var val 00:03:59.170 14:49:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.170 14:49:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.170 14:49:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.170 14:49:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.171 14:49:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.171 14:49:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218156 kB' 'MemFree: 34562772 kB' 'MemUsed: 9655384 kB' 'SwapCached: 0 kB' 'Active: 3255408 kB' 'Inactive: 3493768 kB' 'Active(anon): 2974612 kB' 'Inactive(anon): 0 kB' 'Active(file): 280796 kB' 'Inactive(file): 3493768 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6397512 kB' 'Mapped: 147544 kB' 'AnonPages: 351808 kB' 'Shmem: 2622948 kB' 'KernelStack: 10520 kB' 'PageTables: 4852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280292 kB' 'Slab: 623048 kB' 'SReclaimable: 280292 kB' 'SUnreclaim: 342756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 
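From here the same field scan is repeated per NUMA node: with a node argument, common.sh@23/@24 switches the source file from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo, and common.sh@29 strips the "Node N " prefix those lines carry before the matching loop runs. A hedged sketch of that per-node lookup, again with illustrative names rather than the real helpers:

#!/usr/bin/env bash
# Sketch of the per-node lookup mirrored from the trace: per-node counters
# live in /sys/devices/system/node/nodeN/meminfo and every line starts with
# a "Node N " prefix, unlike /proc/meminfo. Names are illustrative only.

get_node_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Drop the "Node N " prefix, then print the requested field's value.
    sed -E 's/^Node [0-9]+ //' "$mem_f" |
        awk -F': +' -v key="$get" '$1 == key { print $2 + 0; exit }'
}

for node in 0 1; do
    echo "node$node HugePages_Total: $(get_node_meminfo HugePages_Total "$node")"
done

On this system the trace later confirms the even split the test expects, 512 pages on each of the two nodes ("node0=512 expecting 512", "node1=512 expecting 512"), i.e. the 1024 requested pages divided across no_nodes=2.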
00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.171 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.171 14:49:17 -- setup/common.sh@32 -- # continue 00:03:59.172 14:49:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.172 14:49:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.172 14:49:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.172 14:49:17 -- setup/common.sh@33 -- # echo 0 00:03:59.172 14:49:17 -- setup/common.sh@33 -- # return 0 00:03:59.172 14:49:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.172 14:49:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.172 14:49:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.172 14:49:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.172 14:49:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.172 node0=512 expecting 512 00:03:59.172 14:49:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.172 14:49:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.172 14:49:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.172 14:49:17 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:59.172 node1=512 expecting 512 00:03:59.172 14:49:17 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:59.172 00:03:59.172 real 0m3.191s 00:03:59.172 user 0m1.183s 00:03:59.172 sys 0m1.890s 00:03:59.172 14:49:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.172 14:49:17 -- common/autotest_common.sh@10 -- # set +x 00:03:59.172 ************************************ 00:03:59.172 END TEST per_node_1G_alloc 00:03:59.172 ************************************ 00:03:59.172 14:49:17 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:59.172 14:49:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.172 14:49:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.172 14:49:17 -- common/autotest_common.sh@10 -- # set +x 00:03:59.172 ************************************ 00:03:59.172 START TEST even_2G_alloc 00:03:59.172 ************************************ 00:03:59.172 14:49:17 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:59.172 14:49:17 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:59.172 14:49:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.172 14:49:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.172 14:49:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.172 14:49:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.172 14:49:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.172 14:49:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.172 14:49:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.172 14:49:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.172 14:49:17 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.172 14:49:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.172 14:49:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.172 14:49:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.172 14:49:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.172 14:49:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.172 14:49:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:59.172 14:49:17 -- setup/hugepages.sh@83 -- # : 512 00:03:59.172 14:49:17 -- setup/hugepages.sh@84 -- # : 1 00:03:59.172 14:49:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.172 14:49:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:59.172 14:49:17 -- setup/hugepages.sh@83 -- # : 0 00:03:59.172 14:49:17 -- setup/hugepages.sh@84 -- # : 0 00:03:59.172 14:49:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.172 14:49:17 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:59.172 14:49:17 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:59.172 14:49:17 -- setup/hugepages.sh@153 -- # setup output 00:03:59.172 14:49:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.172 14:49:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.467 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:02.467 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:80:04.2 (8086 
2021): Already using the vfio-pci driver 00:04:02.467 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:02.467 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:02.467 14:49:20 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:02.467 14:49:20 -- setup/hugepages.sh@89 -- # local node 00:04:02.467 14:49:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.467 14:49:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.467 14:49:20 -- setup/hugepages.sh@92 -- # local surp 00:04:02.467 14:49:20 -- setup/hugepages.sh@93 -- # local resv 00:04:02.467 14:49:20 -- setup/hugepages.sh@94 -- # local anon 00:04:02.467 14:49:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.467 14:49:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.467 14:49:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.467 14:49:20 -- setup/common.sh@18 -- # local node= 00:04:02.467 14:49:20 -- setup/common.sh@19 -- # local var val 00:04:02.467 14:49:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.467 14:49:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.467 14:49:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.468 14:49:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.468 14:49:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.468 14:49:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71062400 kB' 'MemAvailable: 75062308 kB' 'Buffers: 2704 kB' 'Cached: 14846160 kB' 'SwapCached: 0 kB' 'Active: 11684548 kB' 'Inactive: 3781800 kB' 'Active(anon): 11232964 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620772 kB' 'Mapped: 189884 kB' 'Shmem: 10615480 kB' 'KReclaimable: 602484 kB' 'Slab: 1290308 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687824 kB' 'KernelStack: 22512 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12735976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221736 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:20 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 
14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.468 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.468 14:49:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.469 14:49:21 -- 
setup/common.sh@33 -- # echo 0 00:04:02.469 14:49:21 -- setup/common.sh@33 -- # return 0 00:04:02.469 14:49:21 -- setup/hugepages.sh@97 -- # anon=0 00:04:02.469 14:49:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.469 14:49:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.469 14:49:21 -- setup/common.sh@18 -- # local node= 00:04:02.469 14:49:21 -- setup/common.sh@19 -- # local var val 00:04:02.469 14:49:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.469 14:49:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.469 14:49:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.469 14:49:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.469 14:49:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.469 14:49:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71071928 kB' 'MemAvailable: 75071836 kB' 'Buffers: 2704 kB' 'Cached: 14846164 kB' 'SwapCached: 0 kB' 'Active: 11684260 kB' 'Inactive: 3781800 kB' 'Active(anon): 11232676 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620560 kB' 'Mapped: 189840 kB' 'Shmem: 10615484 kB' 'KReclaimable: 602484 kB' 'Slab: 1290308 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687824 kB' 'KernelStack: 22528 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12735988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221688 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 
14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 
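
(Editor's note) The long runs of "continue" in the trace above and below are not noise: they are the get_meminfo() helper from test/setup/common.sh walking /proc/meminfo one "Key: value" line at a time until it reaches the field it was asked for (first AnonHugePages, here HugePages_Surp). The following is a minimal sketch reconstructed from this xtrace, not the verbatim SPDK helper, showing the shape of that loop:

# get_meminfo KEY [NODE] - print the value of KEY from meminfo.
# Editorial sketch reconstructed from the xtrace in this log, not the
# verbatim helper in test/setup/common.sh; behaviour mirrors the trace.
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # with a node argument the per-node meminfo is read instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node N "; strip that prefix
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # source of the repeated 'continue' traces
        echo "$val"                        # e.g. 0 for AnonHugePages on this host
        return 0
    done
    return 1
}

In the trace just above, the scan finally matched AnonHugePages, echoed 0, and hugepages.sh recorded it as anon=0; the same scan is now repeating for HugePages_Surp.
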
14:49:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.469 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.469 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': 
' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.470 14:49:21 -- setup/common.sh@33 -- # echo 0 00:04:02.470 14:49:21 -- setup/common.sh@33 -- # return 0 00:04:02.470 14:49:21 -- setup/hugepages.sh@99 -- # surp=0 00:04:02.470 14:49:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.470 14:49:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.470 14:49:21 -- setup/common.sh@18 -- # local node= 00:04:02.470 14:49:21 -- setup/common.sh@19 -- # local var val 00:04:02.470 14:49:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.470 14:49:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.470 14:49:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.470 14:49:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.470 14:49:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.470 14:49:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71073436 kB' 'MemAvailable: 75073344 kB' 'Buffers: 2704 kB' 'Cached: 14846184 kB' 'SwapCached: 0 kB' 'Active: 11684648 kB' 'Inactive: 3781800 kB' 'Active(anon): 11233064 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620936 kB' 'Mapped: 189840 kB' 'Shmem: 10615504 kB' 'KReclaimable: 602484 kB' 'Slab: 1290264 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687780 kB' 'KernelStack: 22544 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12736004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221688 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.470 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.470 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 
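
(Editor's note) Each get_meminfo call re-emits the full meminfo contents through printf before scanning it, which is why the identical field list appears again above, this time for the HugePages_Rsvd lookup. The hugepage figures in that dump are self-consistent, as a quick editorial cross-check (not part of the test script) shows:

# 1024 huge pages x 2048 kB per page should equal the reported Hugetlb size.
echo $(( 1024 * 2048 ))   # -> 2097152, matching 'Hugetlb: 2097152 kB' in the dump
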
00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- 
setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.471 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.471 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.472 14:49:21 -- setup/common.sh@33 -- # echo 0 00:04:02.472 14:49:21 -- setup/common.sh@33 -- # return 0 00:04:02.472 14:49:21 -- setup/hugepages.sh@100 -- # resv=0 00:04:02.472 14:49:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.472 nr_hugepages=1024 00:04:02.472 14:49:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.472 resv_hugepages=0 00:04:02.472 14:49:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.472 surplus_hugepages=0 00:04:02.472 14:49:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.472 anon_hugepages=0 00:04:02.472 14:49:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.472 14:49:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.472 14:49:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.472 14:49:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.472 14:49:21 -- setup/common.sh@18 -- # local node= 00:04:02.472 14:49:21 -- setup/common.sh@19 -- # local var val 00:04:02.472 14:49:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.472 14:49:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.472 14:49:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.472 14:49:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.472 14:49:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.472 14:49:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.472 14:49:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71073436 kB' 'MemAvailable: 75073344 kB' 'Buffers: 2704 kB' 'Cached: 14846188 kB' 'SwapCached: 0 kB' 'Active: 11684304 kB' 'Inactive: 3781800 kB' 'Active(anon): 11232720 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620608 kB' 'Mapped: 189840 kB' 'Shmem: 10615508 kB' 'KReclaimable: 602484 kB' 'Slab: 1290264 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687780 kB' 'KernelStack: 22528 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12736016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221688 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.472 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.472 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 
14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # continue 
00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.473 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.473 14:49:21 -- setup/common.sh@33 -- # echo 1024 00:04:02.473 14:49:21 -- setup/common.sh@33 -- # return 0 00:04:02.473 14:49:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.473 14:49:21 -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.473 14:49:21 -- setup/hugepages.sh@27 -- # local node 00:04:02.473 14:49:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.473 14:49:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.473 14:49:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.473 14:49:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.473 14:49:21 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.473 14:49:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.473 14:49:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.473 14:49:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.473 14:49:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.473 14:49:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.473 14:49:21 -- setup/common.sh@18 -- # local node=0 00:04:02.473 14:49:21 -- setup/common.sh@19 -- # local var val 00:04:02.473 14:49:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.473 14:49:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.473 14:49:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.473 14:49:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.473 14:49:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.473 14:49:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.473 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 36509604 kB' 'MemUsed: 11558792 kB' 'SwapCached: 0 kB' 'Active: 8429052 kB' 'Inactive: 288032 kB' 'Active(anon): 8258264 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 288032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8451368 kB' 'Mapped: 42288 kB' 'AnonPages: 268992 kB' 'Shmem: 7992548 kB' 'KernelStack: 12024 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322192 kB' 'Slab: 667472 kB' 'SReclaimable: 322192 kB' 'SUnreclaim: 345280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.474 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.474 14:49:21 -- setup/common.sh@33 -- # echo 0 00:04:02.474 14:49:21 -- setup/common.sh@33 -- # return 0 00:04:02.474 14:49:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.474 14:49:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.474 14:49:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.474 14:49:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:02.474 14:49:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.474 14:49:21 -- setup/common.sh@18 -- # local node=1 00:04:02.474 14:49:21 -- setup/common.sh@19 -- # local var val 00:04:02.474 14:49:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.474 14:49:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.474 14:49:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:02.474 14:49:21 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:04:02.474 14:49:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.474 14:49:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.474 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218156 kB' 'MemFree: 34564288 kB' 'MemUsed: 9653868 kB' 'SwapCached: 0 kB' 'Active: 3255220 kB' 'Inactive: 3493768 kB' 'Active(anon): 2974424 kB' 'Inactive(anon): 0 kB' 'Active(file): 280796 kB' 'Inactive(file): 3493768 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6397540 kB' 'Mapped: 147552 kB' 'AnonPages: 351512 kB' 'Shmem: 2622976 kB' 'KernelStack: 10488 kB' 'PageTables: 4752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280292 kB' 'Slab: 622792 kB' 'SReclaimable: 280292 kB' 'SUnreclaim: 342500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- 
setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # continue 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.475 14:49:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.475 14:49:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.475 14:49:21 -- setup/common.sh@33 -- # echo 0 00:04:02.475 14:49:21 -- setup/common.sh@33 -- # return 0 00:04:02.475 14:49:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.475 14:49:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.475 14:49:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.475 14:49:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.475 14:49:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:02.475 node0=512 expecting 512 00:04:02.475 14:49:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.475 14:49:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.476 14:49:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.476 14:49:21 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:02.476 node1=512 expecting 512 00:04:02.476 14:49:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:02.476 00:04:02.476 real 0m3.440s 00:04:02.476 user 0m1.357s 00:04:02.476 sys 0m2.137s 00:04:02.476 14:49:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.476 14:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:02.476 ************************************ 00:04:02.476 END TEST even_2G_alloc 00:04:02.476 ************************************ 00:04:02.476 14:49:21 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:02.476 14:49:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:02.476 14:49:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:02.476 14:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:02.476 ************************************ 00:04:02.476 START TEST odd_alloc 00:04:02.476 ************************************ 00:04:02.476 14:49:21 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:02.476 14:49:21 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:02.476 14:49:21 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:02.476 14:49:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.476 14:49:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.476 14:49:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:02.476 14:49:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.476 14:49:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.476 14:49:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.476 14:49:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:02.476 14:49:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.476 14:49:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.476 14:49:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.476 14:49:21 
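The long "[[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" runs traced above are setup/common.sh's get_meminfo walking every row of /proc/meminfo (or of /sys/devices/system/node/nodeN/meminfo for a per-node query) until it reaches the requested field, then echoing its value. A minimal self-contained sketch of that pattern follows; the function name is illustrative and this is not the SPDK setup/common.sh source itself.

get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _rest
    # Per-node queries read that node's meminfo file instead of the global one.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        # Node files prefix every row with "Node N "; strip it before parsing.
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _rest <<< "$line"   # "HugePages_Surp: 0" -> var/val
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < "$mem_f"
    echo 0
}
# e.g. get_meminfo_sketch HugePages_Surp 0 prints the surplus huge pages on NUMA node 0,
# mirroring the HugePages_Surp lookups traced above.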
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.476 14:49:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:02.476 14:49:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.476 14:49:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:02.476 14:49:21 -- setup/hugepages.sh@83 -- # : 513 00:04:02.476 14:49:21 -- setup/hugepages.sh@84 -- # : 1 00:04:02.476 14:49:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.476 14:49:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:02.476 14:49:21 -- setup/hugepages.sh@83 -- # : 0 00:04:02.476 14:49:21 -- setup/hugepages.sh@84 -- # : 0 00:04:02.476 14:49:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.476 14:49:21 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:02.476 14:49:21 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:02.476 14:49:21 -- setup/hugepages.sh@160 -- # setup output 00:04:02.476 14:49:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.476 14:49:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.764 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.764 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.764 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.764 14:49:24 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:05.764 14:49:24 -- setup/hugepages.sh@89 -- # local node 00:04:05.764 14:49:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.764 14:49:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.764 14:49:24 -- setup/hugepages.sh@92 -- # local surp 00:04:05.764 14:49:24 -- setup/hugepages.sh@93 -- # local resv 00:04:05.764 14:49:24 -- setup/hugepages.sh@94 -- # local anon 00:04:05.764 14:49:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.764 14:49:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.764 14:49:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.764 14:49:24 -- setup/common.sh@18 -- # local node= 00:04:05.764 14:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.764 14:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.764 14:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.764 14:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.764 14:49:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.764 14:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.764 
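For odd_alloc the request above is 2098176 kB, i.e. 1025 pages of 2048 kB (HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes), and the per-node bookkeeping traced above ends up with 513 pages on node0 and 512 on node1. Below is a hedged sketch of that as-even-as-possible split, under the assumption that the remainder goes to the lower-numbered nodes; the function name is made up and this is not the hugepages.sh implementation.

split_hugepages_sketch() {
    local total=$1 nodes=$2 i base extra
    base=$((total / nodes))
    extra=$((total % nodes))
    for ((i = 0; i < nodes; i++)); do
        # The first $extra nodes each take one extra page.
        echo "node${i}=$((base + (i < extra ? 1 : 0)))"
    done
}
# split_hugepages_sketch 1025 2  ->  node0=513, node1=512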
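The setup.sh run above reports every PCI device it manages as "Already using the vfio-pci driver". Purely as an illustration of how such a check can be expressed (this helper is hypothetical and not part of setup.sh), the bound driver is the basename of the device's driver symlink in sysfs:

pci_driver_of() {
    # Print the kernel driver currently bound to a PCI device (by BDF),
    # or "none" if no driver is bound.
    local bdf=$1 link=/sys/bus/pci/devices/$bdf/driver
    [[ -e $link ]] && basename "$(readlink -f "$link")" || echo none
}
# pci_driver_of 0000:86:00.0  ->  vfio-pci on this runner, per the messages above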
14:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71081444 kB' 'MemAvailable: 75081352 kB' 'Buffers: 2704 kB' 'Cached: 14846288 kB' 'SwapCached: 0 kB' 'Active: 11685388 kB' 'Inactive: 3781800 kB' 'Active(anon): 11233804 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621492 kB' 'Mapped: 189888 kB' 'Shmem: 10615608 kB' 'KReclaimable: 602484 kB' 'Slab: 1289860 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687376 kB' 'KernelStack: 22560 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 12736620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221752 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.764 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.764 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.765 14:49:24 -- setup/common.sh@33 -- # echo 0 00:04:05.765 14:49:24 -- setup/common.sh@33 -- # return 0 00:04:05.765 14:49:24 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.765 14:49:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.765 14:49:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.765 14:49:24 -- setup/common.sh@18 -- # local node= 00:04:05.765 14:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.765 14:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.765 14:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.765 14:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.765 14:49:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.765 14:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.765 14:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71082564 kB' 'MemAvailable: 75082472 kB' 'Buffers: 2704 kB' 'Cached: 14846292 kB' 'SwapCached: 0 kB' 'Active: 11685548 kB' 'Inactive: 3781800 kB' 'Active(anon): 11233964 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621748 kB' 'Mapped: 189848 kB' 'Shmem: 10615612 kB' 'KReclaimable: 602484 kB' 'Slab: 1289812 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687328 kB' 'KernelStack: 22544 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 12737780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221720 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 
14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.765 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.765 14:49:24 -- setup/common.sh@33 -- # echo 0 00:04:05.765 14:49:24 -- setup/common.sh@33 -- # return 0 00:04:05.765 14:49:24 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.765 14:49:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.765 14:49:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.765 14:49:24 -- setup/common.sh@18 -- # local node= 00:04:05.765 14:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.765 14:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.765 14:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.765 14:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.765 14:49:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.765 14:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.765 14:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.765 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71083172 kB' 'MemAvailable: 75083080 kB' 'Buffers: 2704 kB' 'Cached: 14846304 kB' 'SwapCached: 0 kB' 'Active: 11685292 kB' 'Inactive: 3781800 kB' 'Active(anon): 11233708 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621436 kB' 'Mapped: 189848 kB' 'Shmem: 10615624 kB' 'KReclaimable: 602484 kB' 'Slab: 1289892 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687408 kB' 'KernelStack: 22560 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 12739680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221688 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:05.766 14:49:24 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- 
setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 
14:49:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.766 14:49:24 -- setup/common.sh@33 -- # echo 0 00:04:05.766 
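The long key-by-key scan above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until it reaches the one requested (HugePages_Rsvd here, which comes back 0 and becomes resv=0 just below); the next call repeats the same walk for HugePages_Total, and the caller then checks that the 1025 pages reported there equal nr_hugepages + surp + resv. A simplified sketch of that helper, reconstructed from this xtrace rather than copied from the script:

    # Sketch reconstructed from the trace; not the verbatim setup/common.sh.
    # get_meminfo FIELD [NODE] echoes the value of FIELD from /proc/meminfo,
    # or from a node's own meminfo file when NODE is given.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # get_meminfo HugePages_Rsvd      -> 0   (system-wide, as in this scan)
    # get_meminfo HugePages_Surp 0    -> 0   (node 0 only, used further down)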
14:49:24 -- setup/common.sh@33 -- # return 0 00:04:05.766 14:49:24 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.766 14:49:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:05.766 nr_hugepages=1025 00:04:05.766 14:49:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.766 resv_hugepages=0 00:04:05.766 14:49:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.766 surplus_hugepages=0 00:04:05.766 14:49:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.766 anon_hugepages=0 00:04:05.766 14:49:24 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:05.766 14:49:24 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:05.766 14:49:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.766 14:49:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.766 14:49:24 -- setup/common.sh@18 -- # local node= 00:04:05.766 14:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.766 14:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.766 14:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.766 14:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.766 14:49:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.766 14:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.766 14:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71082072 kB' 'MemAvailable: 75081980 kB' 'Buffers: 2704 kB' 'Cached: 14846316 kB' 'SwapCached: 0 kB' 'Active: 11685912 kB' 'Inactive: 3781800 kB' 'Active(anon): 11234328 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622060 kB' 'Mapped: 189848 kB' 'Shmem: 10615636 kB' 'KReclaimable: 602484 kB' 'Slab: 1289892 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687408 kB' 'KernelStack: 22656 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 12741052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221816 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.766 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.766 14:49:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 
00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 
14:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.767 14:49:24 -- setup/common.sh@33 -- # echo 1025 00:04:05.767 14:49:24 -- setup/common.sh@33 -- # return 0 00:04:05.767 14:49:24 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:05.767 14:49:24 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.767 14:49:24 -- setup/hugepages.sh@27 -- # local node 00:04:05.767 14:49:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.767 14:49:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.767 14:49:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.767 14:49:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:05.767 14:49:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.767 14:49:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.767 14:49:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.767 14:49:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.767 14:49:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.767 14:49:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.767 14:49:24 
-- setup/common.sh@18 -- # local node=0 00:04:05.767 14:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.767 14:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.767 14:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.767 14:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.767 14:49:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.767 14:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.767 14:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 36518584 kB' 'MemUsed: 11549812 kB' 'SwapCached: 0 kB' 'Active: 8428776 kB' 'Inactive: 288032 kB' 'Active(anon): 8257988 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 288032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8451448 kB' 'Mapped: 42296 kB' 'AnonPages: 268524 kB' 'Shmem: 7992628 kB' 'KernelStack: 12008 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322192 kB' 'Slab: 667100 kB' 'SReclaimable: 322192 kB' 'SUnreclaim: 344908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.767 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.767 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
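This scan is the same lookup run against node 0's own meminfo file: the trace a few lines up set node=0, switched mem_f to /sys/devices/system/node/node0/meminfo, and stripped the "Node 0 " prefix from every line before scanning. That per-node file is where the per-node hugepage counts come from, and the values it reports here (512 total, 512 free, 0 surplus on node 0) can also be read directly, for example:

    # Reads the per-node file shown in the trace; the expected output below
    # mirrors the node 0 values printed above.
    grep -E 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node0/meminfo
    # Node 0 HugePages_Total:   512
    # Node 0 HugePages_Free:    512
    # Node 0 HugePages_Surp:      0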
00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # continue 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.768 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.768 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.768 14:49:24 -- setup/common.sh@33 -- # echo 0 00:04:05.768 14:49:24 -- setup/common.sh@33 -- # return 0 00:04:05.768 14:49:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.768 14:49:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.768 14:49:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.768 14:49:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:05.768 14:49:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.768 14:49:24 -- setup/common.sh@18 -- # local node=1 00:04:05.768 14:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.768 14:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.768 14:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.768 14:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:05.768 14:49:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.028 14:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.028 14:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218156 kB' 'MemFree: 34561020 kB' 'MemUsed: 9657136 kB' 'SwapCached: 0 kB' 'Active: 3257468 kB' 'Inactive: 3493768 kB' 'Active(anon): 2976672 kB' 'Inactive(anon): 0 kB' 'Active(file): 280796 kB' 'Inactive(file): 3493768 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6397588 kB' 'Mapped: 147552 kB' 'AnonPages: 353828 kB' 'Shmem: 2623024 kB' 'KernelStack: 10744 kB' 'PageTables: 5392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280292 kB' 'Slab: 622760 kB' 'SReclaimable: 280292 kB' 'SUnreclaim: 342468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.028 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.028 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- 
setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # continue 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.029 14:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.029 14:49:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.029 14:49:24 -- setup/common.sh@33 -- # echo 0 00:04:06.029 14:49:24 -- setup/common.sh@33 -- # return 0 00:04:06.029 14:49:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.029 14:49:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.029 14:49:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.029 14:49:24 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:06.029 node0=512 expecting 513 00:04:06.029 14:49:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.029 14:49:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
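The odd_alloc test wraps up here: as the name suggests, 1025 of the 2048 kB pages cannot split evenly across two NUMA nodes, and they ended up as 512 on one node and 513 on the other. The "node0=512 expecting 513" / "node1=513 expecting 512" lines show the actual and expected counts landing on opposite nodes, but the final check compares only the sorted sets of counts, so "512 513 == 512 513" still passes. The split itself is plain integer division plus the remainder (a worked check, not the script's code):

    # 1025 pages cannot divide evenly over 2 nodes:
    total=1025 nodes=2
    echo $(( total / nodes ))           # 512 on one node
    echo $(( total - total / nodes ))   # 513 on the other
    # The test accepts either assignment order, since it compares the sorted
    # multiset of per-node counts rather than node-by-node values.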
00:04:06.029 14:49:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.029 14:49:24 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:06.029 node1=513 expecting 512 00:04:06.029 14:49:24 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:06.029 00:04:06.029 real 0m3.421s 00:04:06.029 user 0m1.386s 00:04:06.029 sys 0m2.071s 00:04:06.029 14:49:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.029 14:49:24 -- common/autotest_common.sh@10 -- # set +x 00:04:06.029 ************************************ 00:04:06.029 END TEST odd_alloc 00:04:06.029 ************************************ 00:04:06.029 14:49:24 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:06.029 14:49:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.029 14:49:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.029 14:49:24 -- common/autotest_common.sh@10 -- # set +x 00:04:06.029 ************************************ 00:04:06.029 START TEST custom_alloc 00:04:06.029 ************************************ 00:04:06.029 14:49:24 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:06.029 14:49:24 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:06.029 14:49:24 -- setup/hugepages.sh@169 -- # local node 00:04:06.029 14:49:24 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:06.029 14:49:24 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:06.029 14:49:24 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:06.029 14:49:24 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:06.029 14:49:24 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:06.029 14:49:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:06.029 14:49:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:06.029 14:49:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.029 14:49:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.029 14:49:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:06.029 14:49:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.029 14:49:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.029 14:49:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.029 14:49:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:06.029 14:49:24 -- setup/hugepages.sh@83 -- # : 256 00:04:06.029 14:49:24 -- setup/hugepages.sh@84 -- # : 1 00:04:06.029 14:49:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:06.029 14:49:24 -- setup/hugepages.sh@83 -- # : 0 00:04:06.029 14:49:24 -- setup/hugepages.sh@84 -- # : 0 00:04:06.029 14:49:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:06.029 14:49:24 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:06.029 14:49:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.029 14:49:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.029 14:49:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:06.029 14:49:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.029 14:49:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.029 14:49:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.029 14:49:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.029 14:49:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.029 14:49:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.029 14:49:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:06.029 14:49:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:06.029 14:49:24 -- setup/hugepages.sh@78 -- # return 0 00:04:06.029 14:49:24 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:06.029 14:49:24 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:06.029 14:49:24 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:06.029 14:49:24 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:06.029 14:49:24 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:06.029 14:49:24 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:06.029 14:49:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.029 14:49:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.029 14:49:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.029 14:49:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.029 14:49:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.029 14:49:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.029 14:49:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:06.029 14:49:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:06.029 14:49:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:06.029 14:49:24 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:06.029 14:49:24 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:06.029 14:49:24 -- setup/hugepages.sh@78 -- # return 0 00:04:06.029 14:49:24 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:06.029 14:49:24 -- setup/hugepages.sh@187 -- # setup output 00:04:06.029 14:49:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.029 14:49:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.326 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.326 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:00:04.0 
(8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.326 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.326 14:49:27 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:09.326 14:49:27 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:09.326 14:49:27 -- setup/hugepages.sh@89 -- # local node 00:04:09.326 14:49:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.326 14:49:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.326 14:49:27 -- setup/hugepages.sh@92 -- # local surp 00:04:09.326 14:49:27 -- setup/hugepages.sh@93 -- # local resv 00:04:09.326 14:49:27 -- setup/hugepages.sh@94 -- # local anon 00:04:09.326 14:49:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.326 14:49:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.326 14:49:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.326 14:49:27 -- setup/common.sh@18 -- # local node= 00:04:09.326 14:49:27 -- setup/common.sh@19 -- # local var val 00:04:09.326 14:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.326 14:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.326 14:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.326 14:49:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.326 14:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.326 14:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 70055732 kB' 'MemAvailable: 74055640 kB' 'Buffers: 2704 kB' 'Cached: 14846428 kB' 'SwapCached: 0 kB' 'Active: 11685572 kB' 'Inactive: 3781800 kB' 'Active(anon): 11233988 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621588 kB' 'Mapped: 189956 kB' 'Shmem: 10615748 kB' 'KReclaimable: 602484 kB' 'Slab: 1290336 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687852 kB' 'KernelStack: 22560 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 12737276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221704 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- 
setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # 
[[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 14:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 
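A quick aside on the arithmetic the custom_alloc trace above works through: the 1048576 kB (1 GiB) request becomes 512 default-sized hugepages (Hugepagesize: 2048 kB in the meminfo snapshots of these scans) and the 2097152 kB (2 GiB) request becomes 1024, which are then joined into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' for a combined 1536 pages. A minimal standalone sketch of that bookkeeping, with illustrative names rather than the hugepages.sh code itself:

    # Illustrative sketch only, not setup/hugepages.sh: derive the per-node page
    # counts seen in the trace and join them into a HUGENODE-style string.
    hugepagesz_kb=2048                          # default hugepage size (2 MiB)
    declare -a nodes_hp
    nodes_hp[0]=$((1048576 / hugepagesz_kb))    # 1 GiB request on node 0 -> 512 pages
    nodes_hp[1]=$((2097152 / hugepagesz_kb))    # 2 GiB request on node 1 -> 1024 pages

    hugenode="" total=0
    for node in "${!nodes_hp[@]}"; do
        hugenode+="nodes_hp[$node]=${nodes_hp[node]},"
        ((total += nodes_hp[node]))
    done
    echo "HUGENODE='${hugenode%,}' total=$total"
    # -> HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' total=1536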
00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 
-- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.327 14:49:27 -- setup/common.sh@33 -- # echo 0 00:04:09.327 14:49:27 -- setup/common.sh@33 -- # return 0 00:04:09.327 14:49:27 -- setup/hugepages.sh@97 -- # anon=0 00:04:09.327 14:49:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.327 14:49:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.327 14:49:27 -- setup/common.sh@18 -- # local node= 00:04:09.327 14:49:27 -- setup/common.sh@19 -- # local var val 00:04:09.327 14:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.327 14:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.327 14:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.327 14:49:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.327 14:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.327 14:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 70059980 kB' 'MemAvailable: 74059888 kB' 'Buffers: 2704 kB' 'Cached: 14846424 kB' 'SwapCached: 0 kB' 'Active: 11684428 kB' 'Inactive: 3781800 kB' 'Active(anon): 11232844 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620476 kB' 'Mapped: 189436 kB' 'Shmem: 10615744 kB' 'KReclaimable: 602484 kB' 'Slab: 1290352 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687868 kB' 'KernelStack: 22496 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 12702884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221592 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.327 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 
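The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue entries here are simply bash xtrace of a loop walking the cached /proc/meminfo snapshot one key at a time until the requested field (HugePages_Surp in this pass) matches. A hypothetical stand-alone lookup in the same spirit, not the setup/common.sh implementation:

    # Hypothetical helper (illustrative only): scan /proc/meminfo key by key and
    # print the value of the requested field, mirroring the loop traced above.
    meminfo_lookup() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    meminfo_lookup HugePages_Surp    # prints the surplus hugepage count, e.g. 0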
00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.328 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.329 14:49:27 -- setup/common.sh@33 -- # echo 0 00:04:09.329 14:49:27 -- setup/common.sh@33 -- # return 0 00:04:09.329 14:49:27 -- setup/hugepages.sh@99 -- # surp=0 00:04:09.329 14:49:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.329 14:49:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.329 14:49:27 -- setup/common.sh@18 -- # local node= 00:04:09.329 14:49:27 -- setup/common.sh@19 -- # local var val 00:04:09.329 14:49:27 -- setup/common.sh@20 
-- # local mem_f mem 00:04:09.329 14:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.329 14:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.329 14:49:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.329 14:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.329 14:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 70061376 kB' 'MemAvailable: 74061284 kB' 'Buffers: 2704 kB' 'Cached: 14846436 kB' 'SwapCached: 0 kB' 'Active: 11684348 kB' 'Inactive: 3781800 kB' 'Active(anon): 11232764 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620428 kB' 'Mapped: 188848 kB' 'Shmem: 10615756 kB' 'KReclaimable: 602484 kB' 'Slab: 1290316 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687832 kB' 'KernelStack: 22464 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 12702900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221592 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.329 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.329 14:49:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 
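One more detail visible at the start of each of these scans: the helper defaults to mem_f=/proc/meminfo (local node= is empty here), probes /sys/devices/system/node/node<N>/meminfo only when a node number is supplied, and strips the "Node <n> " prefix those per-node files carry (the mem=("${mem[@]#Node +([0-9]) }") entry). A hypothetical node-aware variant of the lookup sketched earlier, again illustrative rather than the setup/common.sh code:

    # Hypothetical node-aware variant of the helper above (illustrative, not
    # setup/common.sh verbatim): read a single NUMA node's meminfo when a node
    # number is given, stripping the "Node <n> " prefix those files carry.
    shopt -s extglob                            # for the +([0-9]) pattern below
    meminfo_lookup_node() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # no-op for /proc/meminfo lines
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    meminfo_lookup_node HugePages_Total 1       # hugepages currently allotted to node 1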
00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:27 -- 
setup/common.sh@32 -- # continue 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.330 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.330 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.330 14:49:28 -- setup/common.sh@33 -- # echo 0 00:04:09.330 14:49:28 -- setup/common.sh@33 -- # return 0 00:04:09.330 14:49:28 -- setup/hugepages.sh@100 -- # resv=0 00:04:09.330 14:49:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:09.330 nr_hugepages=1536 00:04:09.330 14:49:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.330 resv_hugepages=0 00:04:09.330 14:49:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.330 surplus_hugepages=0 00:04:09.330 14:49:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.330 anon_hugepages=0 00:04:09.330 14:49:28 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:09.330 14:49:28 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:09.330 14:49:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.330 14:49:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.330 14:49:28 -- setup/common.sh@18 -- # local node= 00:04:09.330 14:49:28 -- setup/common.sh@19 -- # local var val 00:04:09.330 14:49:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.330 14:49:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.330 14:49:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.330 14:49:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.331 14:49:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.331 14:49:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 70061888 kB' 'MemAvailable: 74061796 kB' 
'Buffers: 2704 kB' 'Cached: 14846448 kB' 'SwapCached: 0 kB' 'Active: 11684580 kB' 'Inactive: 3781800 kB' 'Active(anon): 11232996 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620616 kB' 'Mapped: 188848 kB' 'Shmem: 10615768 kB' 'KReclaimable: 602484 kB' 'Slab: 1290316 kB' 'SReclaimable: 602484 kB' 'SUnreclaim: 687832 kB' 'KernelStack: 22480 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 12702912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221592 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.331 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.331 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- 
# continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.332 14:49:28 -- setup/common.sh@33 -- # echo 1536 00:04:09.332 14:49:28 -- setup/common.sh@33 -- # return 0 00:04:09.332 14:49:28 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:09.332 14:49:28 -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.332 14:49:28 -- setup/hugepages.sh@27 -- # local node 00:04:09.332 14:49:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.332 14:49:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:09.332 14:49:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.332 14:49:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.332 14:49:28 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.332 14:49:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.332 14:49:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.332 14:49:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.332 14:49:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.332 14:49:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.332 14:49:28 -- setup/common.sh@18 -- # local node=0 00:04:09.332 14:49:28 -- setup/common.sh@19 -- # local var val 00:04:09.332 14:49:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.332 14:49:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.332 14:49:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.332 14:49:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.332 14:49:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.332 14:49:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.332 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.332 14:49:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 36543264 kB' 'MemUsed: 11525132 kB' 'SwapCached: 0 kB' 'Active: 8427728 kB' 'Inactive: 288032 kB' 'Active(anon): 8256940 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 288032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8451536 kB' 'Mapped: 41512 kB' 'AnonPages: 267496 kB' 'Shmem: 7992716 kB' 'KernelStack: 11960 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322192 kB' 'Slab: 667440 kB' 'SReclaimable: 322192 kB' 'SUnreclaim: 345248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.332 14:49:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.332 14:49:28 -- 
setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 
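The long run of "continue" lines here (which resumes immediately below) is setup/common.sh's get_meminfo() tracing under xtrace: it reads the whole meminfo file into an array, strips any "Node N " prefix, then splits each row on ': ' and skips key after key until the requested one (HugePages_Surp for node 0 at this point) matches, echoing only the numeric value. A minimal standalone sketch of that pattern, with a hypothetical helper name rather than the SPDK source verbatim:

    # Sketch: return one meminfo value, optionally from a specific NUMA node.
    shopt -s extglob
    get_meminfo_sketch() {                    # e.g. get_meminfo_sketch HugePages_Surp 0
        local want=$1 node=${2:-} mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node rows are prefixed with "Node N "
        printf '%s\n' "${mem[@]}" | while IFS=': ' read -r key val _; do
            [[ $key == "$want" ]] && { echo "$val"; break; }   # value only, e.g. "0"
        done
    }

Every non-matching key produces one "[[ ... ]]" test plus one "continue" entry in the trace, which is why a single lookup expands into dozens of log lines here.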
00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.333 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.333 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.333 14:49:28 -- setup/common.sh@33 -- # echo 0 00:04:09.333 14:49:28 -- setup/common.sh@33 -- # return 0 00:04:09.333 14:49:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.333 14:49:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.334 14:49:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.334 14:49:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:09.334 14:49:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.334 14:49:28 -- 
setup/common.sh@18 -- # local node=1 00:04:09.334 14:49:28 -- setup/common.sh@19 -- # local var val 00:04:09.334 14:49:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.334 14:49:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.334 14:49:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:09.334 14:49:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:09.334 14:49:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.334 14:49:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218156 kB' 'MemFree: 33518120 kB' 'MemUsed: 10700036 kB' 'SwapCached: 0 kB' 'Active: 3256880 kB' 'Inactive: 3493768 kB' 'Active(anon): 2976084 kB' 'Inactive(anon): 0 kB' 'Active(file): 280796 kB' 'Inactive(file): 3493768 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6397632 kB' 'Mapped: 147336 kB' 'AnonPages: 353136 kB' 'Shmem: 2623068 kB' 'KernelStack: 10520 kB' 'PageTables: 4852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280292 kB' 'Slab: 622876 kB' 'SReclaimable: 280292 kB' 'SUnreclaim: 342584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 
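The same scan is now running against /sys/devices/system/node/node1/meminfo (hugepages.sh@117 called get_meminfo HugePages_Surp 1), whose dump above reports HugePages_Total: 1024 against node 0's 512 — the 1536 pages the custom_alloc test configured are deliberately split 512/1024 across the two NUMA nodes. The same per-node counts can also be read straight from the standard sysfs hugepage pools; a small illustrative loop, not part of the test scripts:

    # Print and sum the 2048 kB hugepage pool of every NUMA node.
    total=0
    for d in /sys/devices/system/node/node[0-9]*; do
        n=$(<"$d/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${d##*/}: $n hugepages"
        total=$((total + n))
    done
    echo "total: $total"        # 512 + 1024 = 1536 for the run traced here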
00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.334 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.334 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
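HugePages_Surp, the key this scan keeps looking for, counts surplus pages handed out beyond nr_hugepages via overcommit, and HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in. Those two terms are what the earlier hugepages.sh@110 check folded into its arithmetic: HugePages_Total must equal the requested count plus surplus plus reserved, and since that check passed at 1536, surplus and reserved were evidently 0 in this run. As a worked version of the same check, with values taken or inferred from this log:

    nr_hugepages=1536   # custom_alloc requested 512 (node0) + 1024 (node1)
    total=1536          # HugePages_Total echoed by get_meminfo above
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"

The scan resumes below and, once HugePages_Surp matches for node 1, the test compares the resulting 512/1024 split against its expectations.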
00:04:09.335 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # continue 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.335 14:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.335 14:49:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.335 14:49:28 -- setup/common.sh@33 -- # echo 0 00:04:09.335 14:49:28 -- setup/common.sh@33 -- # return 0 00:04:09.335 14:49:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.335 14:49:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.335 14:49:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.335 14:49:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.335 14:49:28 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:09.335 node0=512 expecting 512 00:04:09.335 14:49:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.335 14:49:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.335 14:49:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.335 14:49:28 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:09.335 node1=1024 expecting 1024 00:04:09.335 14:49:28 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:09.335 00:04:09.335 real 0m3.435s 00:04:09.335 user 0m1.391s 00:04:09.335 sys 0m2.082s 00:04:09.335 14:49:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.335 14:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:09.335 ************************************ 00:04:09.335 END TEST custom_alloc 00:04:09.335 ************************************ 00:04:09.335 14:49:28 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:09.335 14:49:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.335 14:49:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.335 14:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:09.335 ************************************ 00:04:09.335 START TEST no_shrink_alloc 00:04:09.335 ************************************ 00:04:09.335 14:49:28 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:09.335 14:49:28 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:09.335 14:49:28 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.335 14:49:28 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:09.335 14:49:28 -- setup/hugepages.sh@51 -- # shift 00:04:09.335 14:49:28 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:09.335 14:49:28 -- setup/hugepages.sh@52 -- # local node_ids 00:04:09.335 14:49:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages 
)) 00:04:09.335 14:49:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.335 14:49:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:09.335 14:49:28 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:09.335 14:49:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.335 14:49:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.335 14:49:28 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.335 14:49:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.335 14:49:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.335 14:49:28 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:09.335 14:49:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:09.335 14:49:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:09.335 14:49:28 -- setup/hugepages.sh@73 -- # return 0 00:04:09.335 14:49:28 -- setup/hugepages.sh@198 -- # setup output 00:04:09.335 14:49:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.335 14:49:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.627 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:12.627 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:12.627 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:12.627 14:49:31 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:12.627 14:49:31 -- setup/hugepages.sh@89 -- # local node 00:04:12.627 14:49:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.627 14:49:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.627 14:49:31 -- setup/hugepages.sh@92 -- # local surp 00:04:12.627 14:49:31 -- setup/hugepages.sh@93 -- # local resv 00:04:12.627 14:49:31 -- setup/hugepages.sh@94 -- # local anon 00:04:12.627 14:49:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.627 14:49:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.627 14:49:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.627 14:49:31 -- setup/common.sh@18 -- # local node= 00:04:12.627 14:49:31 -- setup/common.sh@19 -- # local var val 00:04:12.627 14:49:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.627 14:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.627 14:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.627 14:49:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.627 14:49:31 -- setup/common.sh@28 -- # mapfile -t 
mem 00:04:12.627 14:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71016440 kB' 'MemAvailable: 75016316 kB' 'Buffers: 2704 kB' 'Cached: 14846548 kB' 'SwapCached: 0 kB' 'Active: 11686664 kB' 'Inactive: 3781800 kB' 'Active(anon): 11235080 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621932 kB' 'Mapped: 188964 kB' 'Shmem: 10615868 kB' 'KReclaimable: 602452 kB' 'Slab: 1290212 kB' 'SReclaimable: 602452 kB' 'SUnreclaim: 687760 kB' 'KernelStack: 22544 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12703680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221768 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.627 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.627 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 
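Between the two field scans the log switched tests: custom_alloc passed its final [[ 512,1024 == 512,1024 ]] comparison, and no_shrink_alloc began by turning its 2097152 kB request into a hugepage count pinned to node 0 (node_ids=('0'), nodes_test[0]=1024), re-running scripts/setup.sh (all of the PCI functions it manages were already bound to vfio-pci), and entering verify_nr_hugepages, which is now scanning /proc/meminfo for AnonHugePages. The 1024 it arrived at is presumably just the request divided by the 2048 kB default hugepage size; as a one-line check using the values from this run:

    size_kb=2097152        # argument to get_test_nr_hugepages above
    hugepagesize_kb=2048   # Hugepagesize reported by /proc/meminfo on this machine
    echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"    # 1024, matching hugepages.sh@57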
00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 
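Just before this AnonHugePages lookup, hugepages.sh@96 tested the string "always [madvise] never" against *[never]* — that string is the usual content of the kernel's transparent-hugepage mode file, with the bracketed word marking the active setting, so anonymous huge pages are only treated as relevant when THP is not disabled outright. A sketch of the same gate against the standard sysfs path (illustrative, not the test script itself):

    # Only look at THP-backed anonymous pages when THP is not set to "never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        grep AnonHugePages /proc/meminfo    # 0 kB in the dump above: no THP pages in use
    fi

The scan below continues until AnonHugePages matches, echoes 0, and verify_nr_hugepages records anon=0 before moving on to HugePages_Surp and HugePages_Rsvd.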
00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.628 14:49:31 -- setup/common.sh@33 -- # echo 0 00:04:12.628 14:49:31 -- setup/common.sh@33 -- # return 0 00:04:12.628 14:49:31 -- setup/hugepages.sh@97 -- # anon=0 00:04:12.628 14:49:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.628 14:49:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.628 14:49:31 -- setup/common.sh@18 -- # local node= 00:04:12.628 14:49:31 -- setup/common.sh@19 -- # local var val 00:04:12.628 14:49:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.628 14:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.628 14:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.628 14:49:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.628 14:49:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.628 14:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71018232 kB' 'MemAvailable: 75018108 kB' 'Buffers: 2704 kB' 'Cached: 14846552 kB' 'SwapCached: 0 kB' 'Active: 11685972 kB' 'Inactive: 3781800 kB' 'Active(anon): 11234388 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621776 kB' 'Mapped: 188856 kB' 'Shmem: 10615872 kB' 'KReclaimable: 602452 kB' 'Slab: 1290172 kB' 'SReclaimable: 602452 kB' 'SUnreclaim: 687720 kB' 'KernelStack: 22528 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12703692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221736 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # continue 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.628 14:49:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.628 
14:49:31 -- setup/common.sh@32 -- # continue
00:04:12.628 14:49:31 -- setup/common.sh@31 -- # IFS=': '
00:04:12.628 14:49:31 -- setup/common.sh@31 -- # read -r var val _
[per-key scan: common.sh@31-32 compares each remaining /proc/meminfo key (Active(file) through HugePages_Rsvd) against HugePages_Surp and continues past all of them]
00:04:12.890 14:49:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.890 14:49:31 -- setup/common.sh@33 -- # echo 0
00:04:12.890 14:49:31 -- setup/common.sh@33 -- # return 0
00:04:12.890 14:49:31 -- setup/hugepages.sh@99 -- # surp=0
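For readers following the trace: get_meminfo in setup/common.sh snapshots the chosen meminfo file into an array and walks it with IFS=': ' and read -r var val _, echoing the value once the requested key matches (0 for HugePages_Surp above). A minimal self-contained sketch of the same lookup; meminfo_value is an illustrative name, not SPDK's helper itself:

#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo lookup traced above.
# meminfo_value is an illustrative helper name, not SPDK's get_meminfo().
meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
                # Lines look like "HugePages_Surp:    0" or "MemTotal: 92286552 kB".
                if [[ $var == "$key" ]]; then
                        echo "$val"
                        return 0
                fi
        done < /proc/meminfo
        return 1
}

meminfo_value HugePages_Total   # 1024 on this test node
meminfo_value HugePages_Surp    # 0

The real helper additionally accepts an optional NUMA node argument, which is what the later get_meminfo HugePages_Surp 0 call in this log exercises.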
00:04:12.890 14:49:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.890 14:49:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.890 14:49:31 -- setup/common.sh@18 -- # local node=
00:04:12.890 14:49:31 -- setup/common.sh@19 -- # local var val
00:04:12.890 14:49:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.890 14:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.890 14:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.890 14:49:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.890 14:49:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.890 14:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.891 14:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71019876 kB' 'MemAvailable: 75019752 kB' 'Buffers: 2704 kB' 'Cached: 14846564 kB' 'SwapCached: 0 kB' 'Active: 11685992 kB' 'Inactive: 3781800 kB' 'Active(anon): 11234408 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621776 kB' 'Mapped: 188856 kB' 'Shmem: 10615884 kB' 'KReclaimable: 602452 kB' 'Slab: 1290172 kB' 'SReclaimable: 602452 kB' 'SUnreclaim: 687720 kB' 'KernelStack: 22528 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12703708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221736 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB'
00:04:12.891 14:49:31 -- setup/common.sh@31 -- # IFS=': '
00:04:12.891 14:49:31 -- setup/common.sh@31 -- # read -r var val _
[per-key scan: common.sh@31-32 compares each key (MemTotal through HugePages_Free) against HugePages_Rsvd and continues past all of them]
00:04:12.892 14:49:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.892 14:49:31 -- setup/common.sh@33 -- # echo 0
00:04:12.892 14:49:31 -- setup/common.sh@33 -- # return 0
00:04:12.892 14:49:31 -- setup/hugepages.sh@100 -- # resv=0
00:04:12.892 14:49:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:12.892 nr_hugepages=1024
00:04:12.892 14:49:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:12.892 resv_hugepages=0
00:04:12.892 14:49:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:12.892 surplus_hugepages=0
00:04:12.892 14:49:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:12.892 anon_hugepages=0
00:04:12.892 14:49:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.892 14:49:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:12.892 14:49:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.892 14:49:31 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.892 14:49:31 -- setup/common.sh@18 -- # local node=
00:04:12.892 14:49:31 -- setup/common.sh@19 -- # local var val
00:04:12.892 14:49:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.892 14:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.892 14:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.892 14:49:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.892 14:49:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.892 14:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.892 14:49:31 -- setup/common.sh@31 -- # IFS=': '
00:04:12.892 14:49:31 -- setup/common.sh@31 -- # read -r var val _
00:04:12.892 14:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71019876 kB' 'MemAvailable: 75019752 kB' 'Buffers: 2704 kB' 'Cached: 14846576 kB' 'SwapCached: 0 kB' 'Active: 11685952 kB' 'Inactive: 3781800 kB' 'Active(anon): 11234368 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621680 kB' 'Mapped: 188856 kB' 'Shmem: 10615896 kB' 'KReclaimable: 602452 kB' 'Slab: 1290172 kB' 'SReclaimable: 602452 kB' 'SUnreclaim: 687720 kB' 'KernelStack: 22512 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12703724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221736 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB'
[per-key scan: common.sh@31-32 compares each key (MemTotal through Unaccepted) against HugePages_Total and continues past all of them]
00:04:12.894 14:49:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:12.894 14:49:31 -- setup/common.sh@33 -- # echo 1024
00:04:12.894 14:49:31 -- setup/common.sh@33 -- # return 0
00:04:12.894 14:49:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
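The assertions at hugepages.sh@107 and @110 are plain hugetlb bookkeeping: the HugePages_Total the kernel reports should equal the count the test configured plus any surplus and reserved pages, which on this node works out to 1024 == 1024 + 0 + 0. A stand-alone sketch of that check (variable names are illustrative, not the script's own):

#!/usr/bin/env bash
# Sketch of the hugepage accounting check seen in the trace above.
# All counters come straight from /proc/meminfo; nr_hugepages is what the test requested.
nr_hugepages=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: ${total} pages"
else
        echo "mismatch: total=${total} requested=${nr_hugepages} surp=${surp} resv=${resv}" >&2
        exit 1
fi

get_nodes then repeats the same count per NUMA node, which is where the node0=1024 expecting 1024 line further down comes from.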
00:04:12.894 14:49:31 -- setup/hugepages.sh@112 -- # get_nodes
00:04:12.894 14:49:31 -- setup/hugepages.sh@27 -- # local node
00:04:12.894 14:49:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.894 14:49:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:12.894 14:49:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.894 14:49:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:12.894 14:49:31 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:12.894 14:49:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:12.894 14:49:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.894 14:49:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:12.894 14:49:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:12.894 14:49:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.894 14:49:31 -- setup/common.sh@18 -- # local node=0
00:04:12.894 14:49:31 -- setup/common.sh@19 -- # local var val
00:04:12.894 14:49:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.894 14:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.894 14:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:12.894 14:49:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:12.894 14:49:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.894 14:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.894 14:49:31 -- setup/common.sh@31 -- # IFS=': '
00:04:12.894 14:49:31 -- setup/common.sh@31 -- # read -r var val _
00:04:12.894 14:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 35508676 kB' 'MemUsed: 12559720 kB' 'SwapCached: 0 kB' 'Active: 8429400 kB' 'Inactive: 288032 kB' 'Active(anon): 8258612 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 288032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8451660 kB' 'Mapped: 41520 kB' 'AnonPages: 269012 kB' 'Shmem: 7992840 kB' 'KernelStack: 12024 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322160 kB' 'Slab: 667204 kB' 'SReclaimable: 322160 kB' 'SUnreclaim: 345044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[per-key scan: common.sh@31-32 compares each node0 key (MemTotal through HugePages_Free) against HugePages_Surp and continues past all of them]
00:04:12.895 14:49:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.895 14:49:31 -- setup/common.sh@33 -- # echo 0
00:04:12.895 14:49:31 -- setup/common.sh@33 -- # return 0
00:04:12.895 14:49:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:12.895 14:49:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.895 14:49:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.895 14:49:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.895 14:49:31 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:12.895 node0=1024 expecting 1024
00:04:12.895 14:49:31 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:12.895 14:49:31 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:12.895 14:49:31 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:12.895 14:49:31 -- setup/hugepages.sh@202 -- # setup output
00:04:12.895 14:49:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:12.895 14:49:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:16.188 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:16.188 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:16.188 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:16.188 INFO: Requested 512 hugepages but 1024 already allocated on node0
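The get_meminfo call above was given a node index, so common.sh@23-24 switched its input from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and common.sh@29 stripped the "Node 0" prefix those per-node files carry before the key/value parse. A sketch of that source selection; node_hugepage_count is an illustrative name, not part of the SPDK scripts:

#!/usr/bin/env bash
# Sketch: read a meminfo counter system-wide or for a single NUMA node.
# node_hugepage_count is an illustrative helper, not setup/common.sh's get_meminfo().
node_hugepage_count() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <n> "; drop it, then match the key.
        sed -E 's/^Node [0-9]+ +//' "$mem_f" | awk -v k="$key" -F'[: ]+' '$1 == k {print $2}'
}

node_hugepage_count HugePages_Total      # system-wide: 1024 on this run
node_hugepage_count HugePages_Total 0    # node0 only: 1024 (node1 holds 0)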
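One more check worth calling out before the verify_nr_hugepages pass that follows: hugepages.sh@96 inspects /sys/kernel/mm/transparent_hugepage/enabled (on this node it reads "always [madvise] never", i.e. madvise mode) and only reads AnonHugePages from /proc/meminfo when THP is not forced off. A sketch of that logic, with illustrative variable names:

#!/usr/bin/env bash
# Sketch of the transparent-hugepage check traced in the next verify_nr_hugepages pass:
# AnonHugePages is only consulted when THP is not pinned to "never".
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
        anon=0
fi
echo "THP setting: ${thp}, AnonHugePages: ${anon} kB"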
verify_nr_hugepages 00:04:16.188 14:49:34 -- setup/hugepages.sh@89 -- # local node 00:04:16.188 14:49:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.188 14:49:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.188 14:49:34 -- setup/hugepages.sh@92 -- # local surp 00:04:16.188 14:49:34 -- setup/hugepages.sh@93 -- # local resv 00:04:16.188 14:49:34 -- setup/hugepages.sh@94 -- # local anon 00:04:16.188 14:49:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.188 14:49:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.188 14:49:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.188 14:49:34 -- setup/common.sh@18 -- # local node= 00:04:16.188 14:49:34 -- setup/common.sh@19 -- # local var val 00:04:16.188 14:49:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.188 14:49:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.188 14:49:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.188 14:49:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.188 14:49:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.188 14:49:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71047388 kB' 'MemAvailable: 75047264 kB' 'Buffers: 2704 kB' 'Cached: 14846668 kB' 'SwapCached: 0 kB' 'Active: 11688080 kB' 'Inactive: 3781800 kB' 'Active(anon): 11236496 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623784 kB' 'Mapped: 188900 kB' 'Shmem: 10615988 kB' 'KReclaimable: 602452 kB' 'Slab: 1289928 kB' 'SReclaimable: 602452 kB' 'SUnreclaim: 687476 kB' 'KernelStack: 22640 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12709080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221752 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.188 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.189 14:49:34 -- setup/common.sh@33 -- # echo 0 00:04:16.189 14:49:34 -- setup/common.sh@33 -- # return 0 00:04:16.189 14:49:34 -- setup/hugepages.sh@97 -- # anon=0 00:04:16.189 14:49:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.189 
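The scan just traced is the generic key lookup in setup/common.sh; the same loop runs again below for HugePages_Surp and HugePages_Rsvd. A minimal sketch of what those @17-@33 lines are doing, reconstructed from the trace rather than copied verbatim from the SPDK helper:

shopt -s extglob   # needed for the +([0-9]) pattern the trace shows at @29

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, the per-NUMA-node counters are read instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as above
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages       # prints 0 in this run (no THP-backed memory)
get_meminfo HugePages_Total 0   # prints 1024; the whole pool sits on node0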
14:49:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.189 14:49:34 -- setup/common.sh@18 -- # local node= 00:04:16.189 14:49:34 -- setup/common.sh@19 -- # local var val 00:04:16.189 14:49:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.189 14:49:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.189 14:49:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.189 14:49:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.189 14:49:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.189 14:49:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71049992 kB' 'MemAvailable: 75049868 kB' 'Buffers: 2704 kB' 'Cached: 14846672 kB' 'SwapCached: 0 kB' 'Active: 11687752 kB' 'Inactive: 3781800 kB' 'Active(anon): 11236168 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623420 kB' 'Mapped: 188868 kB' 'Shmem: 10615992 kB' 'KReclaimable: 602452 kB' 'Slab: 1289888 kB' 'SReclaimable: 602452 kB' 'SUnreclaim: 687436 kB' 'KernelStack: 22544 kB' 'PageTables: 9164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12707468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221752 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.189 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 14:49:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.189 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # 
continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.190 14:49:34 -- setup/common.sh@33 -- # echo 0 00:04:16.190 14:49:34 -- setup/common.sh@33 -- # return 0 00:04:16.190 14:49:34 -- setup/hugepages.sh@99 -- # surp=0 00:04:16.190 14:49:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.190 14:49:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.190 14:49:34 -- setup/common.sh@18 -- # local node= 00:04:16.190 14:49:34 -- setup/common.sh@19 -- # local var val 00:04:16.190 14:49:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.190 14:49:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.190 14:49:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.190 14:49:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.190 14:49:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.190 14:49:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 14:49:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71051192 kB' 'MemAvailable: 75051068 kB' 'Buffers: 2704 kB' 'Cached: 14846672 kB' 'SwapCached: 0 kB' 
'Active: 11687996 kB' 'Inactive: 3781800 kB' 'Active(anon): 11236412 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623692 kB' 'Mapped: 188860 kB' 'Shmem: 10615992 kB' 'KReclaimable: 602452 kB' 'Slab: 1289888 kB' 'SReclaimable: 602452 kB' 'SUnreclaim: 687436 kB' 'KernelStack: 22640 kB' 'PageTables: 9244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12709108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221768 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.191 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 
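AnonHugePages and HugePages_Surp have both come back as 0; the HugePages_Rsvd scan in progress returns 0 as well, and the lines that follow print the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary and check that the kernel's pool balances against what the test configured. A hedged sketch of that arithmetic with this run's values; meminfo() here is only a stand-in for the get_meminfo lookup sketched earlier:

meminfo() { awk -v k="$1" '$1 == k":" { print $2 }' /proc/meminfo; }

nr_hugepages=1024                    # what the test configured
anon=$(meminfo AnonHugePages)        # 0 kB of THP-backed memory
surp=$(meminfo HugePages_Surp)       # 0 surplus pages
resv=$(meminfo HugePages_Rsvd)       # 0 reserved-but-unfaulted pages
total=$(meminfo HugePages_Total)     # 1024
# The run only proceeds when the reported pool matches the request:
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool balanced"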
00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.192 14:49:34 -- setup/common.sh@33 -- # echo 0 00:04:16.192 14:49:34 -- setup/common.sh@33 -- # return 0 00:04:16.192 14:49:34 -- setup/hugepages.sh@100 -- # resv=0 00:04:16.192 14:49:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.192 nr_hugepages=1024 00:04:16.192 14:49:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.192 resv_hugepages=0 00:04:16.192 14:49:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.192 surplus_hugepages=0 00:04:16.192 14:49:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.192 anon_hugepages=0 00:04:16.192 14:49:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.192 14:49:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.192 14:49:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.192 14:49:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.192 14:49:34 -- setup/common.sh@18 -- # local node= 00:04:16.192 14:49:34 -- setup/common.sh@19 -- # local var val 00:04:16.192 14:49:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.192 14:49:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.192 14:49:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.192 14:49:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.192 14:49:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.192 14:49:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71052712 kB' 'MemAvailable: 75052588 kB' 'Buffers: 2704 kB' 'Cached: 14846696 kB' 'SwapCached: 0 kB' 'Active: 11687668 kB' 'Inactive: 3781800 kB' 'Active(anon): 11236084 kB' 'Inactive(anon): 0 kB' 'Active(file): 451584 kB' 'Inactive(file): 3781800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623348 kB' 'Mapped: 188868 kB' 'Shmem: 10616016 kB' 'KReclaimable: 602452 kB' 'Slab: 1289872 kB' 'SReclaimable: 602452 kB' 'SUnreclaim: 687420 kB' 'KernelStack: 22624 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12709124 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 221752 kB' 'VmallocChunk: 0 kB' 'Percpu: 114240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4391892 kB' 'DirectMap2M: 42473472 kB' 'DirectMap1G: 54525952 kB' 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.192 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.192 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- 
setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.193 14:49:34 -- 
setup/common.sh@33 -- # echo 1024 00:04:16.193 14:49:34 -- setup/common.sh@33 -- # return 0 00:04:16.193 14:49:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.193 14:49:34 -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.193 14:49:34 -- setup/hugepages.sh@27 -- # local node 00:04:16.193 14:49:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.193 14:49:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.193 14:49:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.193 14:49:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:16.193 14:49:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.193 14:49:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.193 14:49:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.193 14:49:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.193 14:49:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.193 14:49:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.193 14:49:34 -- setup/common.sh@18 -- # local node=0 00:04:16.193 14:49:34 -- setup/common.sh@19 -- # local var val 00:04:16.193 14:49:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.193 14:49:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.193 14:49:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.193 14:49:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.193 14:49:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.193 14:49:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 35513760 kB' 'MemUsed: 12554636 kB' 'SwapCached: 0 kB' 'Active: 8431264 kB' 'Inactive: 288032 kB' 'Active(anon): 8260476 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 288032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8451768 kB' 'Mapped: 41532 kB' 'AnonPages: 270812 kB' 'Shmem: 7992948 kB' 'KernelStack: 12056 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322160 kB' 'Slab: 666928 kB' 'SReclaimable: 322160 kB' 'SUnreclaim: 344768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 
14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # continue 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 14:49:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 14:49:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 14:49:34 -- setup/common.sh@33 -- # echo 0 00:04:16.194 14:49:34 -- setup/common.sh@33 -- # return 0 00:04:16.194 14:49:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.194 14:49:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.194 14:49:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.194 14:49:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.194 14:49:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.194 node0=1024 expecting 1024 00:04:16.195 14:49:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.195 00:04:16.195 real 0m6.796s 00:04:16.195 user 0m2.655s 00:04:16.195 sys 0m4.245s 00:04:16.195 14:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.195 14:49:34 -- common/autotest_common.sh@10 -- # set +x 00:04:16.195 ************************************ 00:04:16.195 END TEST no_shrink_alloc 00:04:16.195 ************************************ 00:04:16.195 14:49:34 -- setup/hugepages.sh@217 -- # clear_hp 00:04:16.195 14:49:34 -- setup/hugepages.sh@37 -- # local node hp 00:04:16.195 14:49:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:16.195 
14:49:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.195 14:49:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:16.195 14:49:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.195 14:49:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:16.195 14:49:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:16.195 14:49:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.195 14:49:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:16.195 14:49:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.195 14:49:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:16.195 14:49:34 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:16.195 14:49:34 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:16.195 00:04:16.195 real 0m24.848s 00:04:16.195 user 0m9.345s 00:04:16.195 sys 0m14.856s 00:04:16.195 14:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.195 14:49:34 -- common/autotest_common.sh@10 -- # set +x 00:04:16.195 ************************************ 00:04:16.195 END TEST hugepages 00:04:16.195 ************************************ 00:04:16.195 14:49:35 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:16.195 14:49:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.195 14:49:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.195 14:49:35 -- common/autotest_common.sh@10 -- # set +x 00:04:16.195 ************************************ 00:04:16.195 START TEST driver 00:04:16.195 ************************************ 00:04:16.195 14:49:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:16.453 * Looking for test storage... 
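The hugepages tests above account for pages per NUMA node by reading /sys/devices/system/node/nodeN/meminfo when it exists and falling back to /proc/meminfo otherwise, which is what the get_meminfo trace shows. Below is a minimal standalone sketch of that lookup; the function name is made up for illustration and is not the SPDK helper itself.

#!/usr/bin/env bash
# Illustrative per-node hugepage counter lookup, mirroring the get_meminfo trace above:
# prefer /sys/devices/system/node/node<N>/meminfo, fall back to /proc/meminfo.
get_node_meminfo() {
    local key=$1 node=$2
    local src=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        src=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines look like "Node 0 HugePages_Surp: 0"; strip the prefix, match the key.
    awk -v k="${key}:" '{ sub(/^Node [0-9]+ /, "") } $1 == k { print $2; exit }' "$src"
}

get_node_meminfo HugePages_Total 0   # 1024 on node0 in this run
get_node_meminfo HugePages_Surp  0   # 0, so nothing is added to nodes_test[0]

On this box that yields 1024 hugepages on node0 and 0 on node1, matching the "node0=1024 expecting 1024" check above.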
00:04:16.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:16.453 14:49:35 -- setup/driver.sh@68 -- # setup reset 00:04:16.453 14:49:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.453 14:49:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.724 14:49:39 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:21.725 14:49:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.725 14:49:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.725 14:49:39 -- common/autotest_common.sh@10 -- # set +x 00:04:21.725 ************************************ 00:04:21.725 START TEST guess_driver 00:04:21.725 ************************************ 00:04:21.725 14:49:39 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:21.725 14:49:39 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:21.725 14:49:39 -- setup/driver.sh@47 -- # local fail=0 00:04:21.725 14:49:39 -- setup/driver.sh@49 -- # pick_driver 00:04:21.725 14:49:39 -- setup/driver.sh@36 -- # vfio 00:04:21.725 14:49:39 -- setup/driver.sh@21 -- # local iommu_grups 00:04:21.725 14:49:39 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:21.725 14:49:39 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:21.725 14:49:39 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:21.725 14:49:39 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:21.725 14:49:39 -- setup/driver.sh@29 -- # (( 223 > 0 )) 00:04:21.725 14:49:39 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:21.725 14:49:39 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:21.725 14:49:39 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:21.725 14:49:39 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:21.725 14:49:39 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:21.725 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:21.725 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:21.725 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:21.725 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:21.725 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:21.725 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:21.725 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:21.725 14:49:39 -- setup/driver.sh@30 -- # return 0 00:04:21.725 14:49:39 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:21.725 14:49:39 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:21.725 14:49:39 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:21.725 14:49:39 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:21.725 Looking for driver=vfio-pci 00:04:21.725 14:49:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.725 14:49:39 -- setup/driver.sh@45 -- # setup output config 00:04:21.725 14:49:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.725 14:49:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.261 14:49:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.261 14:49:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.261 14:49:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.199 14:49:43 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:25.199 14:49:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:25.199 14:49:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.199 14:49:43 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:25.199 14:49:43 -- setup/driver.sh@65 -- # setup reset 00:04:25.199 14:49:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.199 14:49:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.535 00:04:30.535 real 0m8.849s 00:04:30.535 user 0m2.558s 00:04:30.535 sys 0m4.693s 00:04:30.535 14:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.535 14:49:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.535 ************************************ 00:04:30.535 END TEST guess_driver 00:04:30.535 ************************************ 00:04:30.535 00:04:30.535 real 0m13.447s 00:04:30.535 user 0m3.839s 00:04:30.535 sys 0m7.179s 00:04:30.535 14:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.535 14:49:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.535 ************************************ 00:04:30.535 END TEST driver 00:04:30.535 ************************************ 00:04:30.535 14:49:48 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:30.535 14:49:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.535 14:49:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.535 14:49:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.535 ************************************ 00:04:30.535 START TEST devices 00:04:30.535 ************************************ 00:04:30.535 14:49:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:30.535 * Looking for test storage... 
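The guess_driver test above settles on vfio-pci because the host exposes IOMMU groups (223 of them) and modprobe --show-depends can resolve vfio_pci. A rough standalone equivalent of that decision is sketched below; the uio_pci_generic fallback is assumed as the usual alternative and is not exercised in this run, and this is not the SPDK driver.sh implementation.

#!/usr/bin/env bash
# Pick vfio-pci when the IOMMU is active (non-empty /sys/kernel/iommu_groups) and the
# module resolves; otherwise try uio_pci_generic.
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    if [[ -e "${groups[0]}" ]] && modprobe --show-depends vfio_pci &>/dev/null; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic &>/dev/null; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

driver=$(pick_driver) && echo "Looking for driver=$driver"   # prints vfio-pci on this host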
00:04:30.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:30.535 14:49:48 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:30.535 14:49:48 -- setup/devices.sh@192 -- # setup reset 00:04:30.535 14:49:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.535 14:49:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.827 14:49:51 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:33.827 14:49:51 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:33.827 14:49:51 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:33.827 14:49:51 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:33.827 14:49:51 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:33.827 14:49:51 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:33.827 14:49:51 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:33.827 14:49:51 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.827 14:49:51 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:33.827 14:49:51 -- setup/devices.sh@196 -- # blocks=() 00:04:33.827 14:49:51 -- setup/devices.sh@196 -- # declare -a blocks 00:04:33.827 14:49:51 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:33.827 14:49:51 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:33.827 14:49:51 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:33.827 14:49:51 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.827 14:49:51 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:33.827 14:49:51 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.827 14:49:51 -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:04:33.827 14:49:51 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:04:33.827 14:49:51 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:33.827 14:49:51 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:33.827 14:49:51 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:33.827 No valid GPT data, bailing 00:04:33.827 14:49:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.827 14:49:52 -- scripts/common.sh@393 -- # pt= 00:04:33.827 14:49:52 -- scripts/common.sh@394 -- # return 1 00:04:33.827 14:49:52 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:33.827 14:49:52 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:33.827 14:49:52 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:33.827 14:49:52 -- setup/common.sh@80 -- # echo 1000204886016 00:04:33.827 14:49:52 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:33.827 14:49:52 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.827 14:49:52 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:04:33.827 14:49:52 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:33.827 14:49:52 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:33.827 14:49:52 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:33.827 14:49:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.827 14:49:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.827 14:49:52 -- common/autotest_common.sh@10 -- # set +x 00:04:33.827 ************************************ 00:04:33.827 START TEST nvme_mount 00:04:33.827 ************************************ 00:04:33.827 14:49:52 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:33.827 14:49:52 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:33.827 14:49:52 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:33.827 14:49:52 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.827 14:49:52 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.827 14:49:52 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:33.827 14:49:52 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.827 14:49:52 -- setup/common.sh@40 -- # local part_no=1 00:04:33.827 14:49:52 -- setup/common.sh@41 -- # local size=1073741824 00:04:33.827 14:49:52 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.827 14:49:52 -- setup/common.sh@44 -- # parts=() 00:04:33.827 14:49:52 -- setup/common.sh@44 -- # local parts 00:04:33.827 14:49:52 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.827 14:49:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.827 14:49:52 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.827 14:49:52 -- setup/common.sh@46 -- # (( part++ )) 00:04:33.827 14:49:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.827 14:49:52 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:33.827 14:49:52 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.827 14:49:52 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:34.395 Creating new GPT entries in memory. 00:04:34.395 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.395 other utilities. 00:04:34.395 14:49:53 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.395 14:49:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.395 14:49:53 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.395 14:49:53 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.396 14:49:53 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:35.334 Creating new GPT entries in memory. 00:04:35.334 The operation has completed successfully. 
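The nvme_mount test running here boils down to: wipe the GPT on /dev/nvme0n1, create a single ~1 GiB partition, format it ext4, mount it under the test directory and drop a test_nvme file in it. Condensed into plain commands below; this is a destructive sketch only, with the device, sector range and mount point taken from this run.

#!/usr/bin/env bash
set -e
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199     # one ~1 GiB partition (sectors 2048..2099199)
mkfs.ext4 -qF "${disk}p1"               # quiet, force
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                  # dummy file the verify step looks for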
00:04:35.334 14:49:54 -- setup/common.sh@57 -- # (( part++ )) 00:04:35.334 14:49:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.334 14:49:54 -- setup/common.sh@62 -- # wait 3056702 00:04:35.334 14:49:54 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.334 14:49:54 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:35.334 14:49:54 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.334 14:49:54 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:35.334 14:49:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:35.593 14:49:54 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.593 14:49:54 -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.593 14:49:54 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:35.593 14:49:54 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:35.593 14:49:54 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.593 14:49:54 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.593 14:49:54 -- setup/devices.sh@53 -- # local found=0 00:04:35.593 14:49:54 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.593 14:49:54 -- setup/devices.sh@56 -- # : 00:04:35.593 14:49:54 -- setup/devices.sh@59 -- # local pci status 00:04:35.593 14:49:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.593 14:49:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:35.593 14:49:54 -- setup/devices.sh@47 -- # setup output config 00:04:35.593 14:49:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.593 14:49:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:38.887 14:49:57 -- setup/devices.sh@63 -- # found=1 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 
14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.887 14:49:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.887 14:49:57 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:38.887 14:49:57 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.887 14:49:57 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.887 14:49:57 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.887 14:49:57 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:38.887 14:49:57 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.887 14:49:57 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.887 14:49:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.887 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.887 14:49:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.887 14:49:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.887 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:38.887 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:38.887 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:38.887 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:38.887 14:49:57 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:38.887 14:49:57 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:38.887 14:49:57 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.887 14:49:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:38.887 14:49:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:39.146 14:49:57 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.146 14:49:57 -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.146 14:49:57 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:39.146 14:49:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:39.146 14:49:57 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.146 14:49:57 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.146 14:49:57 -- setup/devices.sh@53 -- # local found=0 00:04:39.146 14:49:57 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.146 14:49:57 -- setup/devices.sh@56 -- # : 00:04:39.146 14:49:57 -- setup/devices.sh@59 -- # local pci status 00:04:39.146 14:49:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.146 14:49:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:39.146 14:49:57 -- setup/devices.sh@47 -- # setup output config 00:04:39.146 14:49:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.146 14:49:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:42.437 14:50:00 -- setup/devices.sh@63 -- # found=1 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.437 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.438 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.438 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.438 14:50:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:42.438 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.438 14:50:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.438 14:50:00 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:42.438 14:50:00 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.438 14:50:00 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.438 14:50:00 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.438 14:50:00 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.438 14:50:00 -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:04:42.438 14:50:00 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:42.438 14:50:00 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:42.438 14:50:00 -- setup/devices.sh@50 -- # local mount_point= 00:04:42.438 14:50:00 -- setup/devices.sh@51 -- # local test_file= 00:04:42.438 14:50:00 -- setup/devices.sh@53 -- # local found=0 00:04:42.438 14:50:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.438 14:50:00 -- setup/devices.sh@59 -- # local pci status 00:04:42.438 14:50:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.438 14:50:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:42.438 14:50:00 -- setup/devices.sh@47 -- # setup output config 00:04:42.438 14:50:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.438 14:50:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.980 14:50:03 -- 
setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:44.980 14:50:03 -- setup/devices.sh@63 -- # found=1 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.980 14:50:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.980 14:50:03 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:44.980 14:50:03 -- setup/devices.sh@68 -- # return 0 00:04:44.980 14:50:03 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:44.980 14:50:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.980 14:50:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:44.980 14:50:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:44.980 14:50:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:44.980 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:44.980 00:04:44.980 real 0m11.584s 00:04:44.980 user 0m3.311s 00:04:44.980 sys 0m5.874s 00:04:44.980 14:50:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.980 14:50:03 -- common/autotest_common.sh@10 -- # set +x 00:04:44.980 ************************************ 00:04:44.980 END TEST nvme_mount 00:04:44.980 ************************************ 00:04:44.980 14:50:03 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:44.980 14:50:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.980 14:50:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.980 14:50:03 -- common/autotest_common.sh@10 -- # set +x 00:04:44.980 ************************************ 00:04:44.980 START TEST dm_mount 00:04:44.980 ************************************ 00:04:44.980 14:50:03 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:44.980 14:50:03 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:44.980 14:50:03 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:44.980 14:50:03 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:44.980 14:50:03 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:44.980 14:50:03 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:44.980 14:50:03 -- setup/common.sh@40 -- # local part_no=2 00:04:44.980 14:50:03 -- setup/common.sh@41 -- # local size=1073741824 00:04:44.980 14:50:03 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:44.980 14:50:03 -- setup/common.sh@44 -- # parts=() 00:04:44.980 14:50:03 -- setup/common.sh@44 -- # local parts 00:04:44.980 14:50:03 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:44.980 14:50:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.980 14:50:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:44.980 14:50:03 -- setup/common.sh@46 -- # (( part++ )) 00:04:44.980 14:50:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.980 14:50:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:44.980 14:50:03 -- setup/common.sh@46 -- # (( part++ )) 00:04:44.980 14:50:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.980 14:50:03 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:44.980 14:50:03 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:44.980 14:50:03 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:45.918 Creating new GPT entries in memory. 00:04:45.918 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:45.918 other utilities. 00:04:45.918 14:50:04 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:45.918 14:50:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.918 14:50:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.918 14:50:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.918 14:50:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:47.297 Creating new GPT entries in memory. 00:04:47.297 The operation has completed successfully. 
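The dm_mount test starting here creates two 1 GiB partitions and stacks a device-mapper device called nvme_dm_test on them before formatting and mounting it. The log does not show the table handed to dmsetup; a linear concatenation of the two partitions, as sketched below, is one plausible equivalent rather than the exact test implementation, and the commands are destructive.

#!/usr/bin/env bash
set -e
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount

sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199     # nvme0n1p1
sgdisk "$disk" --new=2:2099200:4196351  # nvme0n1p2
udevadm settle                          # wait for the partition nodes (the test uses sync_dev_uevents.sh)

p1=$(blockdev --getsz "${disk}p1")      # sizes in 512-byte sectors
p2=$(blockdev --getsz "${disk}p2")
dmsetup create nvme_dm_test <<EOF
0 $p1 linear ${disk}p1 0
$p1 $p2 linear ${disk}p2 0
EOF

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p "$mnt"
mount /dev/mapper/nvme_dm_test "$mnt"
touch "$mnt/test_dm"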
00:04:47.297 14:50:05 -- setup/common.sh@57 -- # (( part++ )) 00:04:47.297 14:50:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.297 14:50:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.297 14:50:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.297 14:50:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:48.234 The operation has completed successfully. 00:04:48.234 14:50:06 -- setup/common.sh@57 -- # (( part++ )) 00:04:48.234 14:50:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.234 14:50:06 -- setup/common.sh@62 -- # wait 3061523 00:04:48.234 14:50:06 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:48.234 14:50:06 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.234 14:50:06 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:48.234 14:50:06 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:48.234 14:50:06 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:48.234 14:50:06 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.234 14:50:06 -- setup/devices.sh@161 -- # break 00:04:48.234 14:50:06 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.234 14:50:06 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:48.234 14:50:06 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:48.234 14:50:06 -- setup/devices.sh@166 -- # dm=dm-0 00:04:48.234 14:50:06 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:48.234 14:50:06 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:48.234 14:50:06 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.234 14:50:06 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:48.235 14:50:06 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.235 14:50:06 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.235 14:50:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:48.235 14:50:06 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.235 14:50:06 -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:48.235 14:50:06 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:48.235 14:50:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:48.235 14:50:06 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.235 14:50:06 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:48.235 14:50:06 -- setup/devices.sh@53 -- # local found=0 00:04:48.235 14:50:06 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:48.235 14:50:06 -- setup/devices.sh@56 -- # : 00:04:48.235 14:50:06 -- 
setup/devices.sh@59 -- # local pci status 00:04:48.235 14:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.235 14:50:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:48.235 14:50:06 -- setup/devices.sh@47 -- # setup output config 00:04:48.235 14:50:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.235 14:50:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:51.524 14:50:09 -- setup/devices.sh@63 -- # found=1 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:51.524 14:50:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.524 14:50:09 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:51.524 14:50:09 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.524 14:50:09 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:51.524 14:50:09 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:51.524 14:50:09 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.524 14:50:10 -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:51.524 14:50:10 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:51.524 14:50:10 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:51.524 14:50:10 -- setup/devices.sh@50 -- # local mount_point= 00:04:51.524 14:50:10 -- setup/devices.sh@51 -- # local test_file= 00:04:51.524 14:50:10 -- setup/devices.sh@53 -- # local found=0 00:04:51.524 14:50:10 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:51.524 14:50:10 -- setup/devices.sh@59 -- # local pci status 00:04:51.524 14:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.524 14:50:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:51.524 14:50:10 -- setup/devices.sh@47 -- # setup output config 00:04:51.524 14:50:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.524 14:50:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.816 14:50:12 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:54.816 14:50:12 -- setup/devices.sh@63 -- # found=1 00:04:54.816 14:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 
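The teardown that follows unmounts the dm and nvme mount points, removes the nvme_dm_test mapping and wipes all signatures from the partitions and the whole disk, which is why wipefs reports the ext4 magic (53 ef), both GPT headers and the protective MBR being erased. A condensed version is below, with paths and names taken from this run.

#!/usr/bin/env bash
base=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup

mountpoint -q "$base/dm_mount" && umount "$base/dm_mount"
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
    [[ -b $part ]] && wipefs --all "$part"    # clears any filesystem magic (ext4's 53 ef here)
done

mountpoint -q "$base/nvme_mount" && umount "$base/nvme_mount"
# Wiping the whole disk removes the primary GPT, the backup GPT at the end of the device
# and the protective MBR -- the three erases reported in the log.
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1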
00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.816 14:50:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.816 14:50:13 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:54.816 14:50:13 -- setup/devices.sh@68 -- # return 0 00:04:54.816 14:50:13 -- setup/devices.sh@187 -- # cleanup_dm 00:04:54.816 14:50:13 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.816 14:50:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:54.816 14:50:13 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:54.816 14:50:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:54.816 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.816 14:50:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:54.816 00:04:54.816 real 0m9.571s 00:04:54.816 user 0m2.398s 00:04:54.816 sys 0m4.155s 00:04:54.816 14:50:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.816 14:50:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.816 ************************************ 00:04:54.816 END TEST dm_mount 00:04:54.816 ************************************ 00:04:54.816 14:50:13 -- setup/devices.sh@1 -- # cleanup 00:04:54.816 14:50:13 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:54.816 14:50:13 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.816 14:50:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:54.816 14:50:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:54.816 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:54.816 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:54.816 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:54.816 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:54.816 14:50:13 -- setup/devices.sh@12 -- # cleanup_dm 00:04:54.816 14:50:13 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.816 14:50:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:54.816 14:50:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.816 14:50:13 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:54.816 00:04:54.816 real 0m25.091s 00:04:54.816 user 0m7.084s 00:04:54.816 sys 0m12.458s 00:04:54.816 14:50:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.816 14:50:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.816 ************************************ 00:04:54.816 END TEST devices 00:04:54.816 ************************************ 00:04:54.816 00:04:54.816 real 1m26.282s 00:04:54.816 user 0m27.968s 00:04:54.816 sys 0m48.337s 00:04:54.816 14:50:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.816 14:50:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.816 ************************************ 00:04:54.816 END TEST setup.sh 00:04:54.816 ************************************ 00:04:55.076 14:50:13 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:58.366 Hugepages 00:04:58.366 node hugesize free / total 00:04:58.366 node0 1048576kB 0 / 0 00:04:58.366 node0 2048kB 2048 / 2048 00:04:58.366 node1 1048576kB 0 / 0 00:04:58.366 node1 2048kB 0 / 0 00:04:58.366 00:04:58.366 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:58.366 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:58.366 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:58.366 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:58.366 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:58.366 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:58.366 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:58.366 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:58.366 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:58.366 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:58.366 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:58.366 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:58.366 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:58.366 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:58.366 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:58.366 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:58.366 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:58.366 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:58.367 14:50:16 -- spdk/autotest.sh@141 -- # uname -s 00:04:58.367 14:50:16 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:58.367 14:50:16 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:58.367 14:50:16 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.902 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:01.161 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:01.161 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:01.161 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:01.161 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:01.161 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:05:01.162 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:01.162 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:01.162 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:01.162 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:01.162 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:01.162 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:01.162 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:01.162 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:01.162 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:01.162 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:02.099 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:05:02.099 14:50:20 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:03.037 14:50:21 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:03.037 14:50:21 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:03.037 14:50:21 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.037 14:50:21 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:03.037 14:50:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:03.037 14:50:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:03.037 14:50:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.037 14:50:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:03.037 14:50:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:03.296 14:50:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:03.296 14:50:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:05:03.296 14:50:21 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.587 Waiting for block devices as requested 00:05:06.587 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:05:06.587 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:06.587 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:06.587 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:06.846 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:06.846 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:06.846 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:07.105 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:07.105 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:07.105 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:07.105 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:07.365 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:07.365 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:07.365 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:07.625 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:07.625 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:07.625 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:07.884 14:50:26 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:07.884 14:50:26 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:05:07.884 14:50:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:07.884 14:50:26 -- common/autotest_common.sh@1487 -- # grep 0000:86:00.0/nvme/nvme 00:05:07.884 14:50:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:05:07.884 14:50:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:05:07.884 14:50:26 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:05:07.884 14:50:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:07.884 14:50:26 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:07.884 14:50:26 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:07.884 14:50:26 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:07.884 14:50:26 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:07.884 14:50:26 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:07.884 14:50:26 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:05:07.884 14:50:26 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:07.884 14:50:26 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:07.884 14:50:26 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:07.884 14:50:26 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:07.884 14:50:26 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:07.884 14:50:26 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:07.884 14:50:26 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:07.884 14:50:26 -- common/autotest_common.sh@1542 -- # continue 00:05:07.884 14:50:26 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:07.884 14:50:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:07.884 14:50:26 -- common/autotest_common.sh@10 -- # set +x 00:05:07.884 14:50:26 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:07.884 14:50:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:07.884 14:50:26 -- common/autotest_common.sh@10 -- # set +x 00:05:07.884 14:50:26 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:11.286 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.286 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:12.223 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:05:12.223 14:50:30 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:12.223 14:50:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:12.223 14:50:30 -- common/autotest_common.sh@10 -- # set +x 00:05:12.223 14:50:30 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:12.223 14:50:30 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:12.223 14:50:30 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:12.223 14:50:30 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:12.223 14:50:30 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:12.223 14:50:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:12.223 14:50:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.223 
14:50:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.223 14:50:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.223 14:50:30 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.223 14:50:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.223 14:50:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:12.223 14:50:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:05:12.223 14:50:30 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:12.223 14:50:30 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:05:12.223 14:50:30 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:12.223 14:50:30 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:12.223 14:50:30 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:12.223 14:50:30 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:86:00.0 00:05:12.223 14:50:30 -- common/autotest_common.sh@1577 -- # [[ -z 0000:86:00.0 ]] 00:05:12.223 14:50:30 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3071933 00:05:12.223 14:50:30 -- common/autotest_common.sh@1583 -- # waitforlisten 3071933 00:05:12.223 14:50:30 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.223 14:50:30 -- common/autotest_common.sh@819 -- # '[' -z 3071933 ']' 00:05:12.223 14:50:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.223 14:50:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.223 14:50:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.223 14:50:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.223 14:50:30 -- common/autotest_common.sh@10 -- # set +x 00:05:12.223 [2024-06-11 14:50:31.040134] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
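Note: the get_nvme_bdfs_by_id 0x0a54 helper traced above builds its list by asking scripts/gen_nvme.sh for every NVMe traddr and keeping only controllers whose PCI device ID matches. A standalone sketch of the same lookup, run from the SPDK repository root (the 0x0a54 ID and the jq filter are taken directly from the trace; everything else is an assumption):

# Sketch: list NVMe BDFs whose PCI device ID is 0x0a54, the ID the opal-revert
# helper above filters on. Assumes the SPDK repository root as working directory.
target_id=0x0a54
while read -r bdf; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == "$target_id" ]] && echo "$bdf"
done < <(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr')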
00:05:12.223 [2024-06-11 14:50:31.040190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071933 ] 00:05:12.482 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.482 [2024-06-11 14:50:31.119735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.482 [2024-06-11 14:50:31.209272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.482 [2024-06-11 14:50:31.209432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.419 14:50:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:13.419 14:50:31 -- common/autotest_common.sh@852 -- # return 0 00:05:13.419 14:50:31 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:13.419 14:50:31 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:13.419 14:50:31 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:05:16.708 nvme0n1 00:05:16.708 14:50:34 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:16.708 [2024-06-11 14:50:35.198948] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:16.708 request: 00:05:16.708 { 00:05:16.708 "nvme_ctrlr_name": "nvme0", 00:05:16.708 "password": "test", 00:05:16.708 "method": "bdev_nvme_opal_revert", 00:05:16.708 "req_id": 1 00:05:16.708 } 00:05:16.708 Got JSON-RPC error response 00:05:16.708 response: 00:05:16.708 { 00:05:16.708 "code": -32602, 00:05:16.708 "message": "Invalid parameters" 00:05:16.708 } 00:05:16.708 14:50:35 -- common/autotest_common.sh@1589 -- # true 00:05:16.708 14:50:35 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:16.708 14:50:35 -- common/autotest_common.sh@1593 -- # killprocess 3071933 00:05:16.708 14:50:35 -- common/autotest_common.sh@926 -- # '[' -z 3071933 ']' 00:05:16.708 14:50:35 -- common/autotest_common.sh@930 -- # kill -0 3071933 00:05:16.708 14:50:35 -- common/autotest_common.sh@931 -- # uname 00:05:16.708 14:50:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:16.708 14:50:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3071933 00:05:16.708 14:50:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:16.708 14:50:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:16.708 14:50:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3071933' 00:05:16.708 killing process with pid 3071933 00:05:16.708 14:50:35 -- common/autotest_common.sh@945 -- # kill 3071933 00:05:16.708 14:50:35 -- common/autotest_common.sh@950 -- # wait 3071933 00:05:18.610 14:50:36 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:18.610 14:50:36 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:18.610 14:50:36 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:18.610 14:50:36 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:18.610 14:50:36 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:18.610 14:50:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:18.610 14:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:18.610 14:50:36 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:18.610 14:50:36 
-- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.610 14:50:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.610 14:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:18.610 ************************************ 00:05:18.610 START TEST env 00:05:18.610 ************************************ 00:05:18.610 14:50:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:18.610 * Looking for test storage... 00:05:18.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:18.610 14:50:37 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.610 14:50:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.610 14:50:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.610 14:50:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.610 ************************************ 00:05:18.610 START TEST env_memory 00:05:18.610 ************************************ 00:05:18.610 14:50:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.610 00:05:18.610 00:05:18.610 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.610 http://cunit.sourceforge.net/ 00:05:18.610 00:05:18.610 00:05:18.610 Suite: memory 00:05:18.610 Test: alloc and free memory map ...[2024-06-11 14:50:37.116464] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:18.610 passed 00:05:18.610 Test: mem map translation ...[2024-06-11 14:50:37.147615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:18.610 [2024-06-11 14:50:37.147635] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:18.610 [2024-06-11 14:50:37.147688] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:18.610 [2024-06-11 14:50:37.147698] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:18.610 passed 00:05:18.610 Test: mem map registration ...[2024-06-11 14:50:37.209621] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:18.610 [2024-06-11 14:50:37.209638] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:18.610 passed 00:05:18.610 Test: mem map adjacent registrations ...passed 00:05:18.610 00:05:18.610 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.610 suites 1 1 n/a 0 0 00:05:18.610 tests 4 4 4 0 0 00:05:18.610 asserts 152 152 152 0 n/a 00:05:18.610 00:05:18.610 Elapsed time = 0.211 seconds 00:05:18.610 00:05:18.610 real 0m0.223s 00:05:18.610 user 0m0.213s 00:05:18.610 sys 0m0.009s 00:05:18.610 14:50:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.610 14:50:37 -- common/autotest_common.sh@10 -- # set +x 
00:05:18.610 ************************************ 00:05:18.610 END TEST env_memory 00:05:18.610 ************************************ 00:05:18.610 14:50:37 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:18.610 14:50:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.610 14:50:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.610 14:50:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.610 ************************************ 00:05:18.610 START TEST env_vtophys 00:05:18.610 ************************************ 00:05:18.610 14:50:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:18.610 EAL: lib.eal log level changed from notice to debug 00:05:18.610 EAL: Detected lcore 0 as core 0 on socket 0 00:05:18.610 EAL: Detected lcore 1 as core 1 on socket 0 00:05:18.610 EAL: Detected lcore 2 as core 2 on socket 0 00:05:18.610 EAL: Detected lcore 3 as core 3 on socket 0 00:05:18.610 EAL: Detected lcore 4 as core 4 on socket 0 00:05:18.610 EAL: Detected lcore 5 as core 5 on socket 0 00:05:18.610 EAL: Detected lcore 6 as core 6 on socket 0 00:05:18.610 EAL: Detected lcore 7 as core 8 on socket 0 00:05:18.610 EAL: Detected lcore 8 as core 9 on socket 0 00:05:18.610 EAL: Detected lcore 9 as core 10 on socket 0 00:05:18.610 EAL: Detected lcore 10 as core 11 on socket 0 00:05:18.610 EAL: Detected lcore 11 as core 12 on socket 0 00:05:18.610 EAL: Detected lcore 12 as core 13 on socket 0 00:05:18.610 EAL: Detected lcore 13 as core 14 on socket 0 00:05:18.610 EAL: Detected lcore 14 as core 16 on socket 0 00:05:18.610 EAL: Detected lcore 15 as core 17 on socket 0 00:05:18.610 EAL: Detected lcore 16 as core 18 on socket 0 00:05:18.610 EAL: Detected lcore 17 as core 19 on socket 0 00:05:18.610 EAL: Detected lcore 18 as core 20 on socket 0 00:05:18.610 EAL: Detected lcore 19 as core 21 on socket 0 00:05:18.610 EAL: Detected lcore 20 as core 22 on socket 0 00:05:18.610 EAL: Detected lcore 21 as core 24 on socket 0 00:05:18.610 EAL: Detected lcore 22 as core 25 on socket 0 00:05:18.610 EAL: Detected lcore 23 as core 26 on socket 0 00:05:18.610 EAL: Detected lcore 24 as core 27 on socket 0 00:05:18.610 EAL: Detected lcore 25 as core 28 on socket 0 00:05:18.610 EAL: Detected lcore 26 as core 29 on socket 0 00:05:18.610 EAL: Detected lcore 27 as core 30 on socket 0 00:05:18.610 EAL: Detected lcore 28 as core 0 on socket 1 00:05:18.610 EAL: Detected lcore 29 as core 1 on socket 1 00:05:18.610 EAL: Detected lcore 30 as core 2 on socket 1 00:05:18.610 EAL: Detected lcore 31 as core 3 on socket 1 00:05:18.610 EAL: Detected lcore 32 as core 4 on socket 1 00:05:18.610 EAL: Detected lcore 33 as core 5 on socket 1 00:05:18.610 EAL: Detected lcore 34 as core 6 on socket 1 00:05:18.610 EAL: Detected lcore 35 as core 8 on socket 1 00:05:18.611 EAL: Detected lcore 36 as core 9 on socket 1 00:05:18.611 EAL: Detected lcore 37 as core 10 on socket 1 00:05:18.611 EAL: Detected lcore 38 as core 11 on socket 1 00:05:18.611 EAL: Detected lcore 39 as core 12 on socket 1 00:05:18.611 EAL: Detected lcore 40 as core 13 on socket 1 00:05:18.611 EAL: Detected lcore 41 as core 14 on socket 1 00:05:18.611 EAL: Detected lcore 42 as core 16 on socket 1 00:05:18.611 EAL: Detected lcore 43 as core 17 on socket 1 00:05:18.611 EAL: Detected lcore 44 as core 18 on socket 1 00:05:18.611 EAL: Detected lcore 45 as core 19 on socket 1 00:05:18.611 EAL: Detected lcore 46 as 
core 20 on socket 1 00:05:18.611 EAL: Detected lcore 47 as core 21 on socket 1 00:05:18.611 EAL: Detected lcore 48 as core 22 on socket 1 00:05:18.611 EAL: Detected lcore 49 as core 24 on socket 1 00:05:18.611 EAL: Detected lcore 50 as core 25 on socket 1 00:05:18.611 EAL: Detected lcore 51 as core 26 on socket 1 00:05:18.611 EAL: Detected lcore 52 as core 27 on socket 1 00:05:18.611 EAL: Detected lcore 53 as core 28 on socket 1 00:05:18.611 EAL: Detected lcore 54 as core 29 on socket 1 00:05:18.611 EAL: Detected lcore 55 as core 30 on socket 1 00:05:18.611 EAL: Detected lcore 56 as core 0 on socket 0 00:05:18.611 EAL: Detected lcore 57 as core 1 on socket 0 00:05:18.611 EAL: Detected lcore 58 as core 2 on socket 0 00:05:18.611 EAL: Detected lcore 59 as core 3 on socket 0 00:05:18.611 EAL: Detected lcore 60 as core 4 on socket 0 00:05:18.611 EAL: Detected lcore 61 as core 5 on socket 0 00:05:18.611 EAL: Detected lcore 62 as core 6 on socket 0 00:05:18.611 EAL: Detected lcore 63 as core 8 on socket 0 00:05:18.611 EAL: Detected lcore 64 as core 9 on socket 0 00:05:18.611 EAL: Detected lcore 65 as core 10 on socket 0 00:05:18.611 EAL: Detected lcore 66 as core 11 on socket 0 00:05:18.611 EAL: Detected lcore 67 as core 12 on socket 0 00:05:18.611 EAL: Detected lcore 68 as core 13 on socket 0 00:05:18.611 EAL: Detected lcore 69 as core 14 on socket 0 00:05:18.611 EAL: Detected lcore 70 as core 16 on socket 0 00:05:18.611 EAL: Detected lcore 71 as core 17 on socket 0 00:05:18.611 EAL: Detected lcore 72 as core 18 on socket 0 00:05:18.611 EAL: Detected lcore 73 as core 19 on socket 0 00:05:18.611 EAL: Detected lcore 74 as core 20 on socket 0 00:05:18.611 EAL: Detected lcore 75 as core 21 on socket 0 00:05:18.611 EAL: Detected lcore 76 as core 22 on socket 0 00:05:18.611 EAL: Detected lcore 77 as core 24 on socket 0 00:05:18.611 EAL: Detected lcore 78 as core 25 on socket 0 00:05:18.611 EAL: Detected lcore 79 as core 26 on socket 0 00:05:18.611 EAL: Detected lcore 80 as core 27 on socket 0 00:05:18.611 EAL: Detected lcore 81 as core 28 on socket 0 00:05:18.611 EAL: Detected lcore 82 as core 29 on socket 0 00:05:18.611 EAL: Detected lcore 83 as core 30 on socket 0 00:05:18.611 EAL: Detected lcore 84 as core 0 on socket 1 00:05:18.611 EAL: Detected lcore 85 as core 1 on socket 1 00:05:18.611 EAL: Detected lcore 86 as core 2 on socket 1 00:05:18.611 EAL: Detected lcore 87 as core 3 on socket 1 00:05:18.611 EAL: Detected lcore 88 as core 4 on socket 1 00:05:18.611 EAL: Detected lcore 89 as core 5 on socket 1 00:05:18.611 EAL: Detected lcore 90 as core 6 on socket 1 00:05:18.611 EAL: Detected lcore 91 as core 8 on socket 1 00:05:18.611 EAL: Detected lcore 92 as core 9 on socket 1 00:05:18.611 EAL: Detected lcore 93 as core 10 on socket 1 00:05:18.611 EAL: Detected lcore 94 as core 11 on socket 1 00:05:18.611 EAL: Detected lcore 95 as core 12 on socket 1 00:05:18.611 EAL: Detected lcore 96 as core 13 on socket 1 00:05:18.611 EAL: Detected lcore 97 as core 14 on socket 1 00:05:18.611 EAL: Detected lcore 98 as core 16 on socket 1 00:05:18.611 EAL: Detected lcore 99 as core 17 on socket 1 00:05:18.611 EAL: Detected lcore 100 as core 18 on socket 1 00:05:18.611 EAL: Detected lcore 101 as core 19 on socket 1 00:05:18.611 EAL: Detected lcore 102 as core 20 on socket 1 00:05:18.611 EAL: Detected lcore 103 as core 21 on socket 1 00:05:18.611 EAL: Detected lcore 104 as core 22 on socket 1 00:05:18.611 EAL: Detected lcore 105 as core 24 on socket 1 00:05:18.611 EAL: Detected lcore 106 as core 25 on socket 1 
00:05:18.611 EAL: Detected lcore 107 as core 26 on socket 1 00:05:18.611 EAL: Detected lcore 108 as core 27 on socket 1 00:05:18.611 EAL: Detected lcore 109 as core 28 on socket 1 00:05:18.611 EAL: Detected lcore 110 as core 29 on socket 1 00:05:18.611 EAL: Detected lcore 111 as core 30 on socket 1 00:05:18.611 EAL: Maximum logical cores by configuration: 128 00:05:18.611 EAL: Detected CPU lcores: 112 00:05:18.611 EAL: Detected NUMA nodes: 2 00:05:18.611 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:18.611 EAL: Detected shared linkage of DPDK 00:05:18.611 EAL: No shared files mode enabled, IPC will be disabled 00:05:18.611 EAL: Bus pci wants IOVA as 'DC' 00:05:18.611 EAL: Buses did not request a specific IOVA mode. 00:05:18.611 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:18.611 EAL: Selected IOVA mode 'VA' 00:05:18.611 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.611 EAL: Probing VFIO support... 00:05:18.611 EAL: IOMMU type 1 (Type 1) is supported 00:05:18.611 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:18.611 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:18.611 EAL: VFIO support initialized 00:05:18.611 EAL: Ask a virtual area of 0x2e000 bytes 00:05:18.611 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:18.611 EAL: Setting up physically contiguous memory... 00:05:18.611 EAL: Setting maximum number of open files to 524288 00:05:18.611 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:18.611 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:18.611 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:18.611 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.611 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:18.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.611 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.611 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:18.611 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:18.611 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.611 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:18.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.611 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.611 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:18.611 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:18.611 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.611 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:18.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.611 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.611 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:18.611 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:18.611 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.611 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:18.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.611 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.611 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:18.611 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:18.611 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:18.611 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.611 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:18.611 EAL: Memseg list allocated at socket 1, 
page size 0x800kB 00:05:18.611 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.611 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:18.611 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:18.611 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.611 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:18.611 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.611 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.611 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:18.611 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:18.611 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.611 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:18.611 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.611 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.611 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:18.611 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:18.611 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.611 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:18.611 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.611 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.611 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:18.611 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:18.611 EAL: Hugepages will be freed exactly as allocated. 00:05:18.611 EAL: No shared files mode enabled, IPC is disabled 00:05:18.611 EAL: No shared files mode enabled, IPC is disabled 00:05:18.611 EAL: TSC frequency is ~2200000 KHz 00:05:18.611 EAL: Main lcore 0 is ready (tid=7fc706993a00;cpuset=[0]) 00:05:18.611 EAL: Trying to obtain current memory policy. 00:05:18.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.611 EAL: Restoring previous memory policy: 0 00:05:18.611 EAL: request: mp_malloc_sync 00:05:18.611 EAL: No shared files mode enabled, IPC is disabled 00:05:18.611 EAL: Heap on socket 0 was expanded by 2MB 00:05:18.611 EAL: No shared files mode enabled, IPC is disabled 00:05:18.611 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:18.611 EAL: Mem event callback 'spdk:(nil)' registered 00:05:18.611 00:05:18.611 00:05:18.611 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.611 http://cunit.sourceforge.net/ 00:05:18.611 00:05:18.611 00:05:18.611 Suite: components_suite 00:05:18.611 Test: vtophys_malloc_test ...passed 00:05:18.611 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:18.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.611 EAL: Restoring previous memory policy: 4 00:05:18.611 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.611 EAL: request: mp_malloc_sync 00:05:18.611 EAL: No shared files mode enabled, IPC is disabled 00:05:18.611 EAL: Heap on socket 0 was expanded by 4MB 00:05:18.611 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.611 EAL: request: mp_malloc_sync 00:05:18.611 EAL: No shared files mode enabled, IPC is disabled 00:05:18.611 EAL: Heap on socket 0 was shrunk by 4MB 00:05:18.611 EAL: Trying to obtain current memory policy. 
00:05:18.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.611 EAL: Restoring previous memory policy: 4 00:05:18.611 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.611 EAL: request: mp_malloc_sync 00:05:18.611 EAL: No shared files mode enabled, IPC is disabled 00:05:18.611 EAL: Heap on socket 0 was expanded by 6MB 00:05:18.612 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.612 EAL: request: mp_malloc_sync 00:05:18.612 EAL: No shared files mode enabled, IPC is disabled 00:05:18.612 EAL: Heap on socket 0 was shrunk by 6MB 00:05:18.612 EAL: Trying to obtain current memory policy. 00:05:18.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.612 EAL: Restoring previous memory policy: 4 00:05:18.612 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.612 EAL: request: mp_malloc_sync 00:05:18.612 EAL: No shared files mode enabled, IPC is disabled 00:05:18.612 EAL: Heap on socket 0 was expanded by 10MB 00:05:18.612 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.612 EAL: request: mp_malloc_sync 00:05:18.612 EAL: No shared files mode enabled, IPC is disabled 00:05:18.612 EAL: Heap on socket 0 was shrunk by 10MB 00:05:18.612 EAL: Trying to obtain current memory policy. 00:05:18.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.612 EAL: Restoring previous memory policy: 4 00:05:18.612 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.612 EAL: request: mp_malloc_sync 00:05:18.612 EAL: No shared files mode enabled, IPC is disabled 00:05:18.612 EAL: Heap on socket 0 was expanded by 18MB 00:05:18.612 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.612 EAL: request: mp_malloc_sync 00:05:18.612 EAL: No shared files mode enabled, IPC is disabled 00:05:18.612 EAL: Heap on socket 0 was shrunk by 18MB 00:05:18.612 EAL: Trying to obtain current memory policy. 00:05:18.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.612 EAL: Restoring previous memory policy: 4 00:05:18.612 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.612 EAL: request: mp_malloc_sync 00:05:18.612 EAL: No shared files mode enabled, IPC is disabled 00:05:18.612 EAL: Heap on socket 0 was expanded by 34MB 00:05:18.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.871 EAL: request: mp_malloc_sync 00:05:18.871 EAL: No shared files mode enabled, IPC is disabled 00:05:18.871 EAL: Heap on socket 0 was shrunk by 34MB 00:05:18.871 EAL: Trying to obtain current memory policy. 00:05:18.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.871 EAL: Restoring previous memory policy: 4 00:05:18.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.871 EAL: request: mp_malloc_sync 00:05:18.871 EAL: No shared files mode enabled, IPC is disabled 00:05:18.871 EAL: Heap on socket 0 was expanded by 66MB 00:05:18.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.871 EAL: request: mp_malloc_sync 00:05:18.871 EAL: No shared files mode enabled, IPC is disabled 00:05:18.871 EAL: Heap on socket 0 was shrunk by 66MB 00:05:18.871 EAL: Trying to obtain current memory policy. 
00:05:18.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.871 EAL: Restoring previous memory policy: 4 00:05:18.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.871 EAL: request: mp_malloc_sync 00:05:18.871 EAL: No shared files mode enabled, IPC is disabled 00:05:18.871 EAL: Heap on socket 0 was expanded by 130MB 00:05:18.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.871 EAL: request: mp_malloc_sync 00:05:18.871 EAL: No shared files mode enabled, IPC is disabled 00:05:18.871 EAL: Heap on socket 0 was shrunk by 130MB 00:05:18.871 EAL: Trying to obtain current memory policy. 00:05:18.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.871 EAL: Restoring previous memory policy: 4 00:05:18.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.871 EAL: request: mp_malloc_sync 00:05:18.871 EAL: No shared files mode enabled, IPC is disabled 00:05:18.871 EAL: Heap on socket 0 was expanded by 258MB 00:05:18.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.871 EAL: request: mp_malloc_sync 00:05:18.871 EAL: No shared files mode enabled, IPC is disabled 00:05:18.871 EAL: Heap on socket 0 was shrunk by 258MB 00:05:18.871 EAL: Trying to obtain current memory policy. 00:05:18.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.130 EAL: Restoring previous memory policy: 4 00:05:19.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.130 EAL: request: mp_malloc_sync 00:05:19.130 EAL: No shared files mode enabled, IPC is disabled 00:05:19.130 EAL: Heap on socket 0 was expanded by 514MB 00:05:19.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.130 EAL: request: mp_malloc_sync 00:05:19.130 EAL: No shared files mode enabled, IPC is disabled 00:05:19.130 EAL: Heap on socket 0 was shrunk by 514MB 00:05:19.130 EAL: Trying to obtain current memory policy. 
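Note: the vtophys_spdk_malloc_test rounds above and below grow the heap in steps of 2^k + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, then 514 and 1026 MB), and each expansion is undone as soon as the allocation is freed, so the "expanded by" and "shrunk by" events come in matching pairs. A quick way to confirm that from a saved console log (build.log is only a placeholder name):

# Hypothetical post-processing of a saved console log: count the paired heap
# expand/shrink events emitted by the vtophys test. The file name is a placeholder.
grep -oE 'Heap on socket 0 was (expanded|shrunk) by [0-9]+MB' build.log | sort | uniq -c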
00:05:19.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.388 EAL: Restoring previous memory policy: 4 00:05:19.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.388 EAL: request: mp_malloc_sync 00:05:19.388 EAL: No shared files mode enabled, IPC is disabled 00:05:19.388 EAL: Heap on socket 0 was expanded by 1026MB 00:05:19.647 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.906 EAL: request: mp_malloc_sync 00:05:19.906 EAL: No shared files mode enabled, IPC is disabled 00:05:19.906 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:19.906 passed 00:05:19.906 00:05:19.906 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.906 suites 1 1 n/a 0 0 00:05:19.906 tests 2 2 2 0 0 00:05:19.906 asserts 497 497 497 0 n/a 00:05:19.906 00:05:19.906 Elapsed time = 1.020 seconds 00:05:19.906 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.906 EAL: request: mp_malloc_sync 00:05:19.906 EAL: No shared files mode enabled, IPC is disabled 00:05:19.906 EAL: Heap on socket 0 was shrunk by 2MB 00:05:19.906 EAL: No shared files mode enabled, IPC is disabled 00:05:19.906 EAL: No shared files mode enabled, IPC is disabled 00:05:19.906 EAL: No shared files mode enabled, IPC is disabled 00:05:19.906 00:05:19.906 real 0m1.172s 00:05:19.906 user 0m0.667s 00:05:19.906 sys 0m0.470s 00:05:19.906 14:50:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.906 14:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.906 ************************************ 00:05:19.906 END TEST env_vtophys 00:05:19.906 ************************************ 00:05:19.906 14:50:38 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:19.906 14:50:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.906 14:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.906 14:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.906 ************************************ 00:05:19.906 START TEST env_pci 00:05:19.906 ************************************ 00:05:19.906 14:50:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:19.906 00:05:19.906 00:05:19.906 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.906 http://cunit.sourceforge.net/ 00:05:19.906 00:05:19.906 00:05:19.906 Suite: pci 00:05:19.906 Test: pci_hook ...[2024-06-11 14:50:38.549950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3073555 has claimed it 00:05:19.906 EAL: Cannot find device (10000:00:01.0) 00:05:19.906 EAL: Failed to attach device on primary process 00:05:19.906 passed 00:05:19.906 00:05:19.906 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.906 suites 1 1 n/a 0 0 00:05:19.906 tests 1 1 1 0 0 00:05:19.906 asserts 25 25 25 0 n/a 00:05:19.906 00:05:19.906 Elapsed time = 0.032 seconds 00:05:19.906 00:05:19.906 real 0m0.052s 00:05:19.906 user 0m0.015s 00:05:19.906 sys 0m0.037s 00:05:19.906 14:50:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.906 14:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.906 ************************************ 00:05:19.906 END TEST env_pci 00:05:19.906 ************************************ 00:05:19.906 14:50:38 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:19.906 14:50:38 -- env/env.sh@15 -- # uname 00:05:19.906 14:50:38 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:05:19.906 14:50:38 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:19.906 14:50:38 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.906 14:50:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:19.906 14:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.906 14:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.906 ************************************ 00:05:19.906 START TEST env_dpdk_post_init 00:05:19.906 ************************************ 00:05:19.906 14:50:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.906 EAL: Detected CPU lcores: 112 00:05:19.906 EAL: Detected NUMA nodes: 2 00:05:19.906 EAL: Detected shared linkage of DPDK 00:05:19.906 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:19.906 EAL: Selected IOVA mode 'VA' 00:05:19.906 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.906 EAL: VFIO support initialized 00:05:19.906 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.165 EAL: Using IOMMU type 1 (Type 1) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.165 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:20.165 EAL: Ignore mapping IO port bar(1) 00:05:20.166 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:20.166 EAL: Ignore mapping IO port bar(1) 00:05:20.166 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:21.103 EAL: Probe 
PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:05:24.390 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:05:24.390 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:05:24.390 Starting DPDK initialization... 00:05:24.390 Starting SPDK post initialization... 00:05:24.390 SPDK NVMe probe 00:05:24.390 Attaching to 0000:86:00.0 00:05:24.390 Attached to 0000:86:00.0 00:05:24.390 Cleaning up... 00:05:24.390 00:05:24.390 real 0m4.447s 00:05:24.390 user 0m3.340s 00:05:24.391 sys 0m0.159s 00:05:24.391 14:50:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.391 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.391 ************************************ 00:05:24.391 END TEST env_dpdk_post_init 00:05:24.391 ************************************ 00:05:24.391 14:50:43 -- env/env.sh@26 -- # uname 00:05:24.391 14:50:43 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:24.391 14:50:43 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.391 14:50:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.391 14:50:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.391 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.391 ************************************ 00:05:24.391 START TEST env_mem_callbacks 00:05:24.391 ************************************ 00:05:24.391 14:50:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.391 EAL: Detected CPU lcores: 112 00:05:24.391 EAL: Detected NUMA nodes: 2 00:05:24.391 EAL: Detected shared linkage of DPDK 00:05:24.391 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.391 EAL: Selected IOVA mode 'VA' 00:05:24.391 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.391 EAL: VFIO support initialized 00:05:24.391 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.391 00:05:24.391 00:05:24.391 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.391 http://cunit.sourceforge.net/ 00:05:24.391 00:05:24.391 00:05:24.391 Suite: memory 00:05:24.391 Test: test ... 
00:05:24.391 register 0x200000200000 2097152 00:05:24.391 malloc 3145728 00:05:24.391 register 0x200000400000 4194304 00:05:24.391 buf 0x200000500000 len 3145728 PASSED 00:05:24.391 malloc 64 00:05:24.391 buf 0x2000004fff40 len 64 PASSED 00:05:24.391 malloc 4194304 00:05:24.391 register 0x200000800000 6291456 00:05:24.391 buf 0x200000a00000 len 4194304 PASSED 00:05:24.391 free 0x200000500000 3145728 00:05:24.391 free 0x2000004fff40 64 00:05:24.391 unregister 0x200000400000 4194304 PASSED 00:05:24.391 free 0x200000a00000 4194304 00:05:24.391 unregister 0x200000800000 6291456 PASSED 00:05:24.391 malloc 8388608 00:05:24.391 register 0x200000400000 10485760 00:05:24.391 buf 0x200000600000 len 8388608 PASSED 00:05:24.391 free 0x200000600000 8388608 00:05:24.391 unregister 0x200000400000 10485760 PASSED 00:05:24.391 passed 00:05:24.391 00:05:24.391 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.391 suites 1 1 n/a 0 0 00:05:24.391 tests 1 1 1 0 0 00:05:24.391 asserts 15 15 15 0 n/a 00:05:24.391 00:05:24.391 Elapsed time = 0.007 seconds 00:05:24.391 00:05:24.391 real 0m0.066s 00:05:24.391 user 0m0.025s 00:05:24.391 sys 0m0.041s 00:05:24.391 14:50:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.391 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.391 ************************************ 00:05:24.391 END TEST env_mem_callbacks 00:05:24.391 ************************************ 00:05:24.391 00:05:24.391 real 0m6.240s 00:05:24.391 user 0m4.365s 00:05:24.391 sys 0m0.928s 00:05:24.391 14:50:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.391 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.391 ************************************ 00:05:24.391 END TEST env 00:05:24.391 ************************************ 00:05:24.650 14:50:43 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:24.650 14:50:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.650 14:50:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.650 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.650 ************************************ 00:05:24.650 START TEST rpc 00:05:24.650 ************************************ 00:05:24.650 14:50:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:24.650 * Looking for test storage... 00:05:24.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:24.650 14:50:43 -- rpc/rpc.sh@65 -- # spdk_pid=3074521 00:05:24.650 14:50:43 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.650 14:50:43 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:24.650 14:50:43 -- rpc/rpc.sh@67 -- # waitforlisten 3074521 00:05:24.650 14:50:43 -- common/autotest_common.sh@819 -- # '[' -z 3074521 ']' 00:05:24.650 14:50:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.651 14:50:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.651 14:50:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
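Note: once this spdk_tgt instance is listening on /var/tmp/spdk.sock, the rpc_integrity test that follows drives it entirely through scripts/rpc.py. The same sequence can be reproduced by hand against an already-running target; the sketch below uses only RPC calls that appear in the trace and assumes the default socket path.

# Minimal manual reproduction of the rpc_integrity sequence, using only RPCs
# that appear in the log. Assumes spdk_tgt is already running and listening on
# the default /var/tmp/spdk.sock, and that the SPDK repo root is the cwd.
rpc=./scripts/rpc.py
$rpc bdev_get_bdevs | jq length            # a fresh target reports 0 bdevs
malloc=$($rpc bdev_malloc_create 8 512)    # 8 MB malloc bdev, 512-byte blocks
$rpc bdev_passthru_create -b "$malloc" -p Passthru0
$rpc bdev_get_bdevs | jq length            # now 2: the malloc bdev and Passthru0
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"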
00:05:24.651 14:50:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.651 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.651 [2024-06-11 14:50:43.386516] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:24.651 [2024-06-11 14:50:43.386579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074521 ] 00:05:24.651 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.651 [2024-06-11 14:50:43.475540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.909 [2024-06-11 14:50:43.562600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.909 [2024-06-11 14:50:43.562743] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:24.909 [2024-06-11 14:50:43.562754] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3074521' to capture a snapshot of events at runtime. 00:05:24.909 [2024-06-11 14:50:43.562763] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3074521 for offline analysis/debug. 00:05:24.909 [2024-06-11 14:50:43.562785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.477 14:50:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.477 14:50:44 -- common/autotest_common.sh@852 -- # return 0 00:05:25.477 14:50:44 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:25.477 14:50:44 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:25.477 14:50:44 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:25.477 14:50:44 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:25.477 14:50:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.477 14:50:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.477 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.477 ************************************ 00:05:25.477 START TEST rpc_integrity 00:05:25.477 ************************************ 00:05:25.736 14:50:44 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:25.736 14:50:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.736 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.736 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.736 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.736 14:50:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.736 14:50:44 -- rpc/rpc.sh@13 -- # jq length 00:05:25.736 14:50:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.736 14:50:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.736 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.736 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.736 14:50:44 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:25.736 14:50:44 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:25.736 14:50:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.736 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.736 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.736 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.736 14:50:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.736 { 00:05:25.736 "name": "Malloc0", 00:05:25.736 "aliases": [ 00:05:25.736 "cd48e503-3b2f-4f23-b991-4453ddaff1e4" 00:05:25.736 ], 00:05:25.736 "product_name": "Malloc disk", 00:05:25.736 "block_size": 512, 00:05:25.736 "num_blocks": 16384, 00:05:25.736 "uuid": "cd48e503-3b2f-4f23-b991-4453ddaff1e4", 00:05:25.736 "assigned_rate_limits": { 00:05:25.736 "rw_ios_per_sec": 0, 00:05:25.736 "rw_mbytes_per_sec": 0, 00:05:25.736 "r_mbytes_per_sec": 0, 00:05:25.736 "w_mbytes_per_sec": 0 00:05:25.736 }, 00:05:25.736 "claimed": false, 00:05:25.736 "zoned": false, 00:05:25.736 "supported_io_types": { 00:05:25.736 "read": true, 00:05:25.736 "write": true, 00:05:25.736 "unmap": true, 00:05:25.736 "write_zeroes": true, 00:05:25.736 "flush": true, 00:05:25.736 "reset": true, 00:05:25.736 "compare": false, 00:05:25.736 "compare_and_write": false, 00:05:25.736 "abort": true, 00:05:25.736 "nvme_admin": false, 00:05:25.736 "nvme_io": false 00:05:25.737 }, 00:05:25.737 "memory_domains": [ 00:05:25.737 { 00:05:25.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.737 "dma_device_type": 2 00:05:25.737 } 00:05:25.737 ], 00:05:25.737 "driver_specific": {} 00:05:25.737 } 00:05:25.737 ]' 00:05:25.737 14:50:44 -- rpc/rpc.sh@17 -- # jq length 00:05:25.737 14:50:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.737 14:50:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:25.737 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.737 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.737 [2024-06-11 14:50:44.456055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:25.737 [2024-06-11 14:50:44.456095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.737 [2024-06-11 14:50:44.456112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x62aaa0 00:05:25.737 [2024-06-11 14:50:44.456120] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.737 [2024-06-11 14:50:44.457639] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.737 [2024-06-11 14:50:44.457664] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.737 Passthru0 00:05:25.737 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.737 14:50:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.737 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.737 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.737 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.737 14:50:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.737 { 00:05:25.737 "name": "Malloc0", 00:05:25.737 "aliases": [ 00:05:25.737 "cd48e503-3b2f-4f23-b991-4453ddaff1e4" 00:05:25.737 ], 00:05:25.737 "product_name": "Malloc disk", 00:05:25.737 "block_size": 512, 00:05:25.737 "num_blocks": 16384, 00:05:25.737 "uuid": "cd48e503-3b2f-4f23-b991-4453ddaff1e4", 00:05:25.737 "assigned_rate_limits": { 00:05:25.737 "rw_ios_per_sec": 0, 00:05:25.737 "rw_mbytes_per_sec": 0, 00:05:25.737 
"r_mbytes_per_sec": 0, 00:05:25.737 "w_mbytes_per_sec": 0 00:05:25.737 }, 00:05:25.737 "claimed": true, 00:05:25.737 "claim_type": "exclusive_write", 00:05:25.737 "zoned": false, 00:05:25.737 "supported_io_types": { 00:05:25.737 "read": true, 00:05:25.737 "write": true, 00:05:25.737 "unmap": true, 00:05:25.737 "write_zeroes": true, 00:05:25.737 "flush": true, 00:05:25.737 "reset": true, 00:05:25.737 "compare": false, 00:05:25.737 "compare_and_write": false, 00:05:25.737 "abort": true, 00:05:25.737 "nvme_admin": false, 00:05:25.737 "nvme_io": false 00:05:25.737 }, 00:05:25.737 "memory_domains": [ 00:05:25.737 { 00:05:25.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.737 "dma_device_type": 2 00:05:25.737 } 00:05:25.737 ], 00:05:25.737 "driver_specific": {} 00:05:25.737 }, 00:05:25.737 { 00:05:25.737 "name": "Passthru0", 00:05:25.737 "aliases": [ 00:05:25.737 "cf453095-c951-53ca-bc90-b2db60d59d1e" 00:05:25.737 ], 00:05:25.737 "product_name": "passthru", 00:05:25.737 "block_size": 512, 00:05:25.737 "num_blocks": 16384, 00:05:25.737 "uuid": "cf453095-c951-53ca-bc90-b2db60d59d1e", 00:05:25.737 "assigned_rate_limits": { 00:05:25.737 "rw_ios_per_sec": 0, 00:05:25.737 "rw_mbytes_per_sec": 0, 00:05:25.737 "r_mbytes_per_sec": 0, 00:05:25.737 "w_mbytes_per_sec": 0 00:05:25.737 }, 00:05:25.737 "claimed": false, 00:05:25.737 "zoned": false, 00:05:25.737 "supported_io_types": { 00:05:25.737 "read": true, 00:05:25.737 "write": true, 00:05:25.737 "unmap": true, 00:05:25.737 "write_zeroes": true, 00:05:25.737 "flush": true, 00:05:25.737 "reset": true, 00:05:25.737 "compare": false, 00:05:25.737 "compare_and_write": false, 00:05:25.737 "abort": true, 00:05:25.737 "nvme_admin": false, 00:05:25.737 "nvme_io": false 00:05:25.737 }, 00:05:25.737 "memory_domains": [ 00:05:25.737 { 00:05:25.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.737 "dma_device_type": 2 00:05:25.737 } 00:05:25.737 ], 00:05:25.737 "driver_specific": { 00:05:25.737 "passthru": { 00:05:25.737 "name": "Passthru0", 00:05:25.737 "base_bdev_name": "Malloc0" 00:05:25.737 } 00:05:25.737 } 00:05:25.737 } 00:05:25.737 ]' 00:05:25.737 14:50:44 -- rpc/rpc.sh@21 -- # jq length 00:05:25.737 14:50:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.737 14:50:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.737 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.737 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.737 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.737 14:50:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:25.737 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.737 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.737 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.737 14:50:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.737 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.737 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.737 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.737 14:50:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.737 14:50:44 -- rpc/rpc.sh@26 -- # jq length 00:05:25.997 14:50:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.997 00:05:25.997 real 0m0.285s 00:05:25.997 user 0m0.181s 00:05:25.997 sys 0m0.040s 00:05:25.997 14:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.997 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.997 ************************************ 
00:05:25.997 END TEST rpc_integrity 00:05:25.997 ************************************ 00:05:25.997 14:50:44 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:25.997 14:50:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.997 14:50:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.997 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.997 ************************************ 00:05:25.997 START TEST rpc_plugins 00:05:25.997 ************************************ 00:05:25.997 14:50:44 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:25.997 14:50:44 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:25.997 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.997 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.997 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.997 14:50:44 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:25.997 14:50:44 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:25.997 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.997 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.997 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.997 14:50:44 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:25.997 { 00:05:25.997 "name": "Malloc1", 00:05:25.997 "aliases": [ 00:05:25.997 "289e316c-4eb2-4c1f-b0eb-c8cba52016ea" 00:05:25.997 ], 00:05:25.997 "product_name": "Malloc disk", 00:05:25.997 "block_size": 4096, 00:05:25.997 "num_blocks": 256, 00:05:25.997 "uuid": "289e316c-4eb2-4c1f-b0eb-c8cba52016ea", 00:05:25.997 "assigned_rate_limits": { 00:05:25.997 "rw_ios_per_sec": 0, 00:05:25.997 "rw_mbytes_per_sec": 0, 00:05:25.997 "r_mbytes_per_sec": 0, 00:05:25.997 "w_mbytes_per_sec": 0 00:05:25.997 }, 00:05:25.997 "claimed": false, 00:05:25.997 "zoned": false, 00:05:25.997 "supported_io_types": { 00:05:25.997 "read": true, 00:05:25.997 "write": true, 00:05:25.997 "unmap": true, 00:05:25.997 "write_zeroes": true, 00:05:25.997 "flush": true, 00:05:25.997 "reset": true, 00:05:25.997 "compare": false, 00:05:25.997 "compare_and_write": false, 00:05:25.997 "abort": true, 00:05:25.997 "nvme_admin": false, 00:05:25.997 "nvme_io": false 00:05:25.997 }, 00:05:25.997 "memory_domains": [ 00:05:25.997 { 00:05:25.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.997 "dma_device_type": 2 00:05:25.997 } 00:05:25.997 ], 00:05:25.997 "driver_specific": {} 00:05:25.997 } 00:05:25.997 ]' 00:05:25.997 14:50:44 -- rpc/rpc.sh@32 -- # jq length 00:05:25.997 14:50:44 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:25.997 14:50:44 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:25.997 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.997 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.997 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.997 14:50:44 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:25.997 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.997 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.997 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.997 14:50:44 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:25.997 14:50:44 -- rpc/rpc.sh@36 -- # jq length 00:05:25.997 14:50:44 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:25.997 00:05:25.997 real 0m0.143s 00:05:25.997 user 0m0.093s 00:05:25.997 sys 0m0.016s 00:05:25.997 14:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.997 14:50:44 -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.997 ************************************ 00:05:25.997 END TEST rpc_plugins 00:05:25.997 ************************************ 00:05:25.997 14:50:44 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:25.997 14:50:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.997 14:50:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.997 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.997 ************************************ 00:05:25.997 START TEST rpc_trace_cmd_test 00:05:25.997 ************************************ 00:05:25.997 14:50:44 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:25.997 14:50:44 -- rpc/rpc.sh@40 -- # local info 00:05:25.997 14:50:44 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:25.997 14:50:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.997 14:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:26.256 14:50:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.256 14:50:44 -- rpc/rpc.sh@42 -- # info='{ 00:05:26.256 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3074521", 00:05:26.256 "tpoint_group_mask": "0x8", 00:05:26.256 "iscsi_conn": { 00:05:26.256 "mask": "0x2", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "scsi": { 00:05:26.256 "mask": "0x4", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "bdev": { 00:05:26.256 "mask": "0x8", 00:05:26.256 "tpoint_mask": "0xffffffffffffffff" 00:05:26.256 }, 00:05:26.256 "nvmf_rdma": { 00:05:26.256 "mask": "0x10", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "nvmf_tcp": { 00:05:26.256 "mask": "0x20", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "ftl": { 00:05:26.256 "mask": "0x40", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "blobfs": { 00:05:26.256 "mask": "0x80", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "dsa": { 00:05:26.256 "mask": "0x200", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "thread": { 00:05:26.256 "mask": "0x400", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "nvme_pcie": { 00:05:26.256 "mask": "0x800", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "iaa": { 00:05:26.256 "mask": "0x1000", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "nvme_tcp": { 00:05:26.256 "mask": "0x2000", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 }, 00:05:26.256 "bdev_nvme": { 00:05:26.256 "mask": "0x4000", 00:05:26.256 "tpoint_mask": "0x0" 00:05:26.256 } 00:05:26.256 }' 00:05:26.256 14:50:44 -- rpc/rpc.sh@43 -- # jq length 00:05:26.256 14:50:44 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:26.256 14:50:44 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.256 14:50:44 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.256 14:50:44 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.256 14:50:44 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.256 14:50:44 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.256 14:50:45 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.256 14:50:45 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.256 14:50:45 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.256 00:05:26.257 real 0m0.250s 00:05:26.257 user 0m0.219s 00:05:26.257 sys 0m0.023s 00:05:26.257 14:50:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.257 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.257 ************************************ 
00:05:26.257 END TEST rpc_trace_cmd_test 00:05:26.257 ************************************ 00:05:26.515 14:50:45 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.515 14:50:45 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.515 14:50:45 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.515 14:50:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.515 14:50:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.515 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.515 ************************************ 00:05:26.515 START TEST rpc_daemon_integrity 00:05:26.515 ************************************ 00:05:26.515 14:50:45 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:26.516 14:50:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.516 14:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.516 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 14:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.516 14:50:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.516 14:50:45 -- rpc/rpc.sh@13 -- # jq length 00:05:26.516 14:50:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.516 14:50:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.516 14:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.516 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 14:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.516 14:50:45 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.516 14:50:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.516 14:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.516 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 14:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.516 14:50:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.516 { 00:05:26.516 "name": "Malloc2", 00:05:26.516 "aliases": [ 00:05:26.516 "0f14c0ae-53e6-4032-bb7d-7016480496c9" 00:05:26.516 ], 00:05:26.516 "product_name": "Malloc disk", 00:05:26.516 "block_size": 512, 00:05:26.516 "num_blocks": 16384, 00:05:26.516 "uuid": "0f14c0ae-53e6-4032-bb7d-7016480496c9", 00:05:26.516 "assigned_rate_limits": { 00:05:26.516 "rw_ios_per_sec": 0, 00:05:26.516 "rw_mbytes_per_sec": 0, 00:05:26.516 "r_mbytes_per_sec": 0, 00:05:26.516 "w_mbytes_per_sec": 0 00:05:26.516 }, 00:05:26.516 "claimed": false, 00:05:26.516 "zoned": false, 00:05:26.516 "supported_io_types": { 00:05:26.516 "read": true, 00:05:26.516 "write": true, 00:05:26.516 "unmap": true, 00:05:26.516 "write_zeroes": true, 00:05:26.516 "flush": true, 00:05:26.516 "reset": true, 00:05:26.516 "compare": false, 00:05:26.516 "compare_and_write": false, 00:05:26.516 "abort": true, 00:05:26.516 "nvme_admin": false, 00:05:26.516 "nvme_io": false 00:05:26.516 }, 00:05:26.516 "memory_domains": [ 00:05:26.516 { 00:05:26.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.516 "dma_device_type": 2 00:05:26.516 } 00:05:26.516 ], 00:05:26.516 "driver_specific": {} 00:05:26.516 } 00:05:26.516 ]' 00:05:26.516 14:50:45 -- rpc/rpc.sh@17 -- # jq length 00:05:26.516 14:50:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.516 14:50:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:26.516 14:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.516 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 [2024-06-11 14:50:45.250328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:26.516 [2024-06-11 
14:50:45.250362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.516 [2024-06-11 14:50:45.250377] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7ca5d0 00:05:26.516 [2024-06-11 14:50:45.250387] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.516 [2024-06-11 14:50:45.251742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.516 [2024-06-11 14:50:45.251765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.516 Passthru0 00:05:26.516 14:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.516 14:50:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.516 14:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.516 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 14:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.516 14:50:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.516 { 00:05:26.516 "name": "Malloc2", 00:05:26.516 "aliases": [ 00:05:26.516 "0f14c0ae-53e6-4032-bb7d-7016480496c9" 00:05:26.516 ], 00:05:26.516 "product_name": "Malloc disk", 00:05:26.516 "block_size": 512, 00:05:26.516 "num_blocks": 16384, 00:05:26.516 "uuid": "0f14c0ae-53e6-4032-bb7d-7016480496c9", 00:05:26.516 "assigned_rate_limits": { 00:05:26.516 "rw_ios_per_sec": 0, 00:05:26.516 "rw_mbytes_per_sec": 0, 00:05:26.516 "r_mbytes_per_sec": 0, 00:05:26.516 "w_mbytes_per_sec": 0 00:05:26.516 }, 00:05:26.516 "claimed": true, 00:05:26.516 "claim_type": "exclusive_write", 00:05:26.516 "zoned": false, 00:05:26.516 "supported_io_types": { 00:05:26.516 "read": true, 00:05:26.516 "write": true, 00:05:26.516 "unmap": true, 00:05:26.516 "write_zeroes": true, 00:05:26.516 "flush": true, 00:05:26.516 "reset": true, 00:05:26.516 "compare": false, 00:05:26.516 "compare_and_write": false, 00:05:26.516 "abort": true, 00:05:26.516 "nvme_admin": false, 00:05:26.516 "nvme_io": false 00:05:26.516 }, 00:05:26.516 "memory_domains": [ 00:05:26.516 { 00:05:26.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.516 "dma_device_type": 2 00:05:26.516 } 00:05:26.516 ], 00:05:26.516 "driver_specific": {} 00:05:26.516 }, 00:05:26.516 { 00:05:26.516 "name": "Passthru0", 00:05:26.516 "aliases": [ 00:05:26.516 "f005c34e-1b62-5592-88e9-5ae3f6550cda" 00:05:26.516 ], 00:05:26.516 "product_name": "passthru", 00:05:26.516 "block_size": 512, 00:05:26.516 "num_blocks": 16384, 00:05:26.516 "uuid": "f005c34e-1b62-5592-88e9-5ae3f6550cda", 00:05:26.516 "assigned_rate_limits": { 00:05:26.516 "rw_ios_per_sec": 0, 00:05:26.516 "rw_mbytes_per_sec": 0, 00:05:26.516 "r_mbytes_per_sec": 0, 00:05:26.516 "w_mbytes_per_sec": 0 00:05:26.516 }, 00:05:26.516 "claimed": false, 00:05:26.516 "zoned": false, 00:05:26.516 "supported_io_types": { 00:05:26.516 "read": true, 00:05:26.516 "write": true, 00:05:26.516 "unmap": true, 00:05:26.516 "write_zeroes": true, 00:05:26.516 "flush": true, 00:05:26.516 "reset": true, 00:05:26.516 "compare": false, 00:05:26.516 "compare_and_write": false, 00:05:26.516 "abort": true, 00:05:26.516 "nvme_admin": false, 00:05:26.516 "nvme_io": false 00:05:26.516 }, 00:05:26.516 "memory_domains": [ 00:05:26.516 { 00:05:26.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.516 "dma_device_type": 2 00:05:26.516 } 00:05:26.516 ], 00:05:26.516 "driver_specific": { 00:05:26.516 "passthru": { 00:05:26.516 "name": "Passthru0", 00:05:26.516 "base_bdev_name": "Malloc2" 00:05:26.516 } 00:05:26.516 } 00:05:26.516 } 
00:05:26.516 ]' 00:05:26.516 14:50:45 -- rpc/rpc.sh@21 -- # jq length 00:05:26.516 14:50:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.516 14:50:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.516 14:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.516 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 14:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.516 14:50:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:26.516 14:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.516 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 14:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.516 14:50:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.516 14:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.516 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 14:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.516 14:50:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.776 14:50:45 -- rpc/rpc.sh@26 -- # jq length 00:05:26.776 14:50:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.776 00:05:26.776 real 0m0.289s 00:05:26.776 user 0m0.186s 00:05:26.776 sys 0m0.038s 00:05:26.776 14:50:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.776 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 ************************************ 00:05:26.776 END TEST rpc_daemon_integrity 00:05:26.776 ************************************ 00:05:26.776 14:50:45 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:26.776 14:50:45 -- rpc/rpc.sh@84 -- # killprocess 3074521 00:05:26.776 14:50:45 -- common/autotest_common.sh@926 -- # '[' -z 3074521 ']' 00:05:26.776 14:50:45 -- common/autotest_common.sh@930 -- # kill -0 3074521 00:05:26.776 14:50:45 -- common/autotest_common.sh@931 -- # uname 00:05:26.776 14:50:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:26.776 14:50:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3074521 00:05:26.776 14:50:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:26.776 14:50:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:26.776 14:50:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3074521' 00:05:26.776 killing process with pid 3074521 00:05:26.776 14:50:45 -- common/autotest_common.sh@945 -- # kill 3074521 00:05:26.776 14:50:45 -- common/autotest_common.sh@950 -- # wait 3074521 00:05:27.035 00:05:27.035 real 0m2.580s 00:05:27.035 user 0m3.387s 00:05:27.035 sys 0m0.667s 00:05:27.035 14:50:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.035 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.035 ************************************ 00:05:27.035 END TEST rpc 00:05:27.035 ************************************ 00:05:27.035 14:50:45 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.035 14:50:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.035 14:50:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.035 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.295 ************************************ 00:05:27.295 START TEST rpc_client 00:05:27.295 ************************************ 00:05:27.295 14:50:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
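The rpc_integrity and rpc_daemon_integrity cases above reduce to a short create/inspect/delete round-trip over the target's RPC socket: create a malloc bdev, claim it behind a passthru bdev, confirm both show up in bdev_get_bdevs, then tear them down again. A minimal sketch of that flow, assuming a spdk_tgt that is already up on the default RPC socket (the test itself drives this through the rpc_cmd wrapper; the $rpc variable below is only illustrative shorthand):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  malloc=$($rpc bdev_malloc_create 8 512)              # 8 MB malloc bdev, 512-byte blocks; prints the bdev name
  $rpc bdev_passthru_create -b "$malloc" -p Passthru0  # claim the malloc bdev behind a passthru bdev
  $rpc bdev_get_bdevs | jq length                      # expect 2: the malloc bdev plus the passthru on top
  $rpc bdev_passthru_delete Passthru0                  # tear down in reverse order
  $rpc bdev_malloc_delete "$malloc"
  $rpc bdev_get_bdevs | jq length                      # back to 0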
00:05:27.295 * Looking for test storage... 00:05:27.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:27.295 14:50:45 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:27.295 OK 00:05:27.295 14:50:45 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.295 00:05:27.295 real 0m0.094s 00:05:27.295 user 0m0.041s 00:05:27.295 sys 0m0.060s 00:05:27.295 14:50:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.295 14:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.295 ************************************ 00:05:27.295 END TEST rpc_client 00:05:27.295 ************************************ 00:05:27.295 14:50:46 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.295 14:50:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.295 14:50:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.295 14:50:46 -- common/autotest_common.sh@10 -- # set +x 00:05:27.295 ************************************ 00:05:27.295 START TEST json_config 00:05:27.295 ************************************ 00:05:27.295 14:50:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.295 14:50:46 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.295 14:50:46 -- nvmf/common.sh@7 -- # uname -s 00:05:27.295 14:50:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.295 14:50:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.295 14:50:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.295 14:50:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.295 14:50:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.295 14:50:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.295 14:50:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.295 14:50:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.295 14:50:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.295 14:50:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.295 14:50:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:27.295 14:50:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:27.295 14:50:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.295 14:50:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.295 14:50:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.295 14:50:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.295 14:50:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.295 14:50:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.295 14:50:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.295 14:50:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.295 14:50:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.295 14:50:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.295 14:50:46 -- paths/export.sh@5 -- # export PATH 00:05:27.295 14:50:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.295 14:50:46 -- nvmf/common.sh@46 -- # : 0 00:05:27.295 14:50:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:27.295 14:50:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:27.295 14:50:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:27.295 14:50:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.295 14:50:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.295 14:50:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:27.295 14:50:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:27.295 14:50:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:27.295 14:50:46 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:27.295 14:50:46 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:27.295 14:50:46 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:27.295 14:50:46 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.295 14:50:46 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:27.295 14:50:46 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:27.295 14:50:46 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:27.295 14:50:46 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:27.295 14:50:46 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:27.295 14:50:46 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:27.295 14:50:46 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:27.295 14:50:46 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:27.295 14:50:46 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:27.295 14:50:46 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.295 14:50:46 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:27.295 INFO: JSON configuration test init 00:05:27.295 14:50:46 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:27.295 14:50:46 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:27.295 14:50:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.295 14:50:46 -- common/autotest_common.sh@10 -- # set +x 00:05:27.295 14:50:46 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:27.295 14:50:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.295 14:50:46 -- common/autotest_common.sh@10 -- # set +x 00:05:27.295 14:50:46 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:27.295 14:50:46 -- json_config/json_config.sh@98 -- # local app=target 00:05:27.295 14:50:46 -- json_config/json_config.sh@99 -- # shift 00:05:27.295 14:50:46 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:27.295 14:50:46 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:27.295 14:50:46 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:27.295 14:50:46 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:27.295 14:50:46 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:27.295 14:50:46 -- json_config/json_config.sh@111 -- # app_pid[$app]=3075251 00:05:27.295 14:50:46 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:27.295 Waiting for target to run... 00:05:27.295 14:50:46 -- json_config/json_config.sh@114 -- # waitforlisten 3075251 /var/tmp/spdk_tgt.sock 00:05:27.295 14:50:46 -- common/autotest_common.sh@819 -- # '[' -z 3075251 ']' 00:05:27.295 14:50:46 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:27.295 14:50:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.295 14:50:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:27.295 14:50:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.295 14:50:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:27.295 14:50:46 -- common/autotest_common.sh@10 -- # set +x 00:05:27.555 [2024-06-11 14:50:46.168920] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:27.555 [2024-06-11 14:50:46.168985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075251 ] 00:05:27.555 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.813 [2024-06-11 14:50:46.627299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.073 [2024-06-11 14:50:46.722397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.073 [2024-06-11 14:50:46.722540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.332 14:50:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.332 14:50:47 -- common/autotest_common.sh@852 -- # return 0 00:05:28.332 14:50:47 -- json_config/json_config.sh@115 -- # echo '' 00:05:28.332 00:05:28.332 14:50:47 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:28.332 14:50:47 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:28.332 14:50:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:28.332 14:50:47 -- common/autotest_common.sh@10 -- # set +x 00:05:28.332 14:50:47 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:28.332 14:50:47 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:28.332 14:50:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:28.332 14:50:47 -- common/autotest_common.sh@10 -- # set +x 00:05:28.332 14:50:47 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:28.332 14:50:47 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:28.332 14:50:47 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:31.623 14:50:50 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:31.623 14:50:50 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:31.623 14:50:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:31.623 14:50:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.623 14:50:50 -- json_config/json_config.sh@48 -- # local ret=0 00:05:31.623 14:50:50 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:31.623 14:50:50 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:31.623 14:50:50 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:31.623 14:50:50 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:31.623 14:50:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:31.883 14:50:50 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:31.883 14:50:50 -- json_config/json_config.sh@51 -- # local get_types 00:05:31.883 14:50:50 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:31.883 14:50:50 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:31.883 14:50:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:31.883 14:50:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.883 14:50:50 -- json_config/json_config.sh@58 -- # return 0 00:05:31.883 14:50:50 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:31.883 14:50:50 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:31.883 14:50:50 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:31.883 14:50:50 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:31.883 14:50:50 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:31.883 14:50:50 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:31.883 14:50:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:31.883 14:50:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.883 14:50:50 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:31.883 14:50:50 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:31.883 14:50:50 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:31.883 14:50:50 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.883 14:50:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.142 MallocForNvmf0 00:05:32.142 14:50:50 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.142 14:50:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.142 MallocForNvmf1 00:05:32.401 14:50:50 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.401 14:50:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.401 [2024-06-11 14:50:51.115214] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.401 14:50:51 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.401 14:50:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.660 14:50:51 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.660 14:50:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.919 14:50:51 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.919 14:50:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.919 14:50:51 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.919 14:50:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.178 [2024-06-11 14:50:51.817547] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
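The create_nvmf_subsystem_config step above is plain rpc.py traffic against the freshly started target; condensed, the sequence is the one below, assuming an already-initialized spdk_tgt on the same RPC socket (option values are copied from the run above):

  rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # backing bdevs for the namespaces
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport, options as logged above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420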
00:05:33.178 14:50:51 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:33.178 14:50:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.178 14:50:51 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 14:50:51 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:33.178 14:50:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.178 14:50:51 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 14:50:51 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:33.178 14:50:51 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.178 14:50:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.452 MallocBdevForConfigChangeCheck 00:05:33.452 14:50:52 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:33.452 14:50:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.452 14:50:52 -- common/autotest_common.sh@10 -- # set +x 00:05:33.452 14:50:52 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:33.452 14:50:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.713 14:50:52 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:33.713 INFO: shutting down applications... 00:05:33.713 14:50:52 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:33.713 14:50:52 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:33.713 14:50:52 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:33.713 14:50:52 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.619 Calling clear_iscsi_subsystem 00:05:35.619 Calling clear_nvmf_subsystem 00:05:35.619 Calling clear_nbd_subsystem 00:05:35.619 Calling clear_ublk_subsystem 00:05:35.619 Calling clear_vhost_blk_subsystem 00:05:35.619 Calling clear_vhost_scsi_subsystem 00:05:35.619 Calling clear_scheduler_subsystem 00:05:35.619 Calling clear_bdev_subsystem 00:05:35.619 Calling clear_accel_subsystem 00:05:35.619 Calling clear_vmd_subsystem 00:05:35.619 Calling clear_sock_subsystem 00:05:35.619 Calling clear_iobuf_subsystem 00:05:35.619 14:50:54 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.619 14:50:54 -- json_config/json_config.sh@396 -- # count=100 00:05:35.619 14:50:54 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:35.619 14:50:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.619 14:50:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.619 14:50:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.878 14:50:54 -- json_config/json_config.sh@398 -- # break 00:05:35.878 14:50:54 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:35.878 14:50:54 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:35.878 14:50:54 -- json_config/json_config.sh@120 -- # local app=target 00:05:35.878 14:50:54 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:35.878 14:50:54 -- json_config/json_config.sh@124 -- # [[ -n 3075251 ]] 00:05:35.879 14:50:54 -- json_config/json_config.sh@127 -- # kill -SIGINT 3075251 00:05:35.879 14:50:54 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:35.879 14:50:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:35.879 14:50:54 -- json_config/json_config.sh@130 -- # kill -0 3075251 00:05:35.879 14:50:54 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:36.447 14:50:55 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:36.447 14:50:55 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:36.447 14:50:55 -- json_config/json_config.sh@130 -- # kill -0 3075251 00:05:36.447 14:50:55 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:36.447 14:50:55 -- json_config/json_config.sh@132 -- # break 00:05:36.447 14:50:55 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:36.447 14:50:55 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:36.447 SPDK target shutdown done 00:05:36.447 14:50:55 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:36.447 INFO: relaunching applications... 00:05:36.447 14:50:55 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.447 14:50:55 -- json_config/json_config.sh@98 -- # local app=target 00:05:36.447 14:50:55 -- json_config/json_config.sh@99 -- # shift 00:05:36.447 14:50:55 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:36.447 14:50:55 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:36.447 14:50:55 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:36.447 14:50:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:36.447 14:50:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:36.447 14:50:55 -- json_config/json_config.sh@111 -- # app_pid[$app]=3076979 00:05:36.447 14:50:55 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:36.447 Waiting for target to run... 00:05:36.447 14:50:55 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.447 14:50:55 -- json_config/json_config.sh@114 -- # waitforlisten 3076979 /var/tmp/spdk_tgt.sock 00:05:36.447 14:50:55 -- common/autotest_common.sh@819 -- # '[' -z 3076979 ']' 00:05:36.447 14:50:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.447 14:50:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.447 14:50:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.447 14:50:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.447 14:50:55 -- common/autotest_common.sh@10 -- # set +x 00:05:36.447 [2024-06-11 14:50:55.073710] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:36.447 [2024-06-11 14:50:55.073775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076979 ] 00:05:36.447 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.706 [2024-06-11 14:50:55.389219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.706 [2024-06-11 14:50:55.466018] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.706 [2024-06-11 14:50:55.466164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.996 [2024-06-11 14:50:58.511288] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.996 [2024-06-11 14:50:58.543644] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:40.256 14:50:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.256 14:50:58 -- common/autotest_common.sh@852 -- # return 0 00:05:40.256 14:50:58 -- json_config/json_config.sh@115 -- # echo '' 00:05:40.256 00:05:40.256 14:50:58 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:40.256 14:50:58 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:40.256 INFO: Checking if target configuration is the same... 00:05:40.256 14:50:58 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.256 14:50:58 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:40.256 14:50:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.256 + '[' 2 -ne 2 ']' 00:05:40.256 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.256 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:40.256 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.256 +++ basename /dev/fd/62 00:05:40.256 ++ mktemp /tmp/62.XXX 00:05:40.256 + tmp_file_1=/tmp/62.uQN 00:05:40.256 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.256 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.256 + tmp_file_2=/tmp/spdk_tgt_config.json.5GH 00:05:40.256 + ret=0 00:05:40.256 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.515 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.515 + diff -u /tmp/62.uQN /tmp/spdk_tgt_config.json.5GH 00:05:40.515 + echo 'INFO: JSON config files are the same' 00:05:40.515 INFO: JSON config files are the same 00:05:40.515 + rm /tmp/62.uQN /tmp/spdk_tgt_config.json.5GH 00:05:40.515 + exit 0 00:05:40.515 14:50:59 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:40.515 14:50:59 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:40.515 INFO: changing configuration and checking if this can be detected... 
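The 'configuration is the same' check above is a normalized diff: dump the live configuration with save_config, run both it and the JSON file the target was relaunched from through config_filter.py -method sort, and compare. Roughly (the real json_diff.sh uses mktemp; the file names here are illustrative):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $rootdir/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  $rootdir/test/json_config/config_filter.py -method sort \
      < $rootdir/spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/live_sorted.json /tmp/file_sorted.json \
      && echo 'INFO: JSON config files are the same'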
00:05:40.515 14:50:59 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.515 14:50:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.774 14:50:59 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.774 14:50:59 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:40.774 14:50:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.774 + '[' 2 -ne 2 ']' 00:05:40.774 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.774 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:40.774 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.774 +++ basename /dev/fd/62 00:05:40.774 ++ mktemp /tmp/62.XXX 00:05:40.774 + tmp_file_1=/tmp/62.HoI 00:05:40.774 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.774 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.774 + tmp_file_2=/tmp/spdk_tgt_config.json.f6j 00:05:40.774 + ret=0 00:05:40.774 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.342 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.342 + diff -u /tmp/62.HoI /tmp/spdk_tgt_config.json.f6j 00:05:41.342 + ret=1 00:05:41.342 + echo '=== Start of file: /tmp/62.HoI ===' 00:05:41.342 + cat /tmp/62.HoI 00:05:41.342 + echo '=== End of file: /tmp/62.HoI ===' 00:05:41.342 + echo '' 00:05:41.342 + echo '=== Start of file: /tmp/spdk_tgt_config.json.f6j ===' 00:05:41.342 + cat /tmp/spdk_tgt_config.json.f6j 00:05:41.342 + echo '=== End of file: /tmp/spdk_tgt_config.json.f6j ===' 00:05:41.342 + echo '' 00:05:41.342 + rm /tmp/62.HoI /tmp/spdk_tgt_config.json.f6j 00:05:41.342 + exit 1 00:05:41.342 14:51:00 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:41.342 INFO: configuration change detected. 
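Change detection then runs the same comparison in reverse: json_config_test_init planted a marker bdev (MallocBdevForConfigChangeCheck), the test deletes it over RPC, and the normalized diff above is expected to stop being empty. A minimal sketch, reusing the illustrative file names from the previous snippet:

  $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $rootdir/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  if ! diff -u /tmp/live_sorted.json /tmp/file_sorted.json > /dev/null; then
      echo 'INFO: configuration change detected.'
  fi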
00:05:41.342 14:51:00 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:41.342 14:51:00 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:41.342 14:51:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:41.342 14:51:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.342 14:51:00 -- json_config/json_config.sh@360 -- # local ret=0 00:05:41.342 14:51:00 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:41.342 14:51:00 -- json_config/json_config.sh@370 -- # [[ -n 3076979 ]] 00:05:41.342 14:51:00 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:41.342 14:51:00 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:41.342 14:51:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:41.342 14:51:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.342 14:51:00 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:41.342 14:51:00 -- json_config/json_config.sh@246 -- # uname -s 00:05:41.342 14:51:00 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:41.342 14:51:00 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:41.342 14:51:00 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:41.342 14:51:00 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:41.342 14:51:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:41.342 14:51:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.342 14:51:00 -- json_config/json_config.sh@376 -- # killprocess 3076979 00:05:41.342 14:51:00 -- common/autotest_common.sh@926 -- # '[' -z 3076979 ']' 00:05:41.342 14:51:00 -- common/autotest_common.sh@930 -- # kill -0 3076979 00:05:41.342 14:51:00 -- common/autotest_common.sh@931 -- # uname 00:05:41.342 14:51:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.342 14:51:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3076979 00:05:41.342 14:51:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:41.342 14:51:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:41.342 14:51:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3076979' 00:05:41.342 killing process with pid 3076979 00:05:41.342 14:51:00 -- common/autotest_common.sh@945 -- # kill 3076979 00:05:41.342 14:51:00 -- common/autotest_common.sh@950 -- # wait 3076979 00:05:43.249 14:51:01 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.249 14:51:01 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:43.249 14:51:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:43.249 14:51:01 -- common/autotest_common.sh@10 -- # set +x 00:05:43.249 14:51:01 -- json_config/json_config.sh@381 -- # return 0 00:05:43.249 14:51:01 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:43.249 INFO: Success 00:05:43.249 00:05:43.249 real 0m15.735s 00:05:43.249 user 0m17.758s 00:05:43.249 sys 0m2.089s 00:05:43.249 14:51:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.249 14:51:01 -- common/autotest_common.sh@10 -- # set +x 00:05:43.249 ************************************ 00:05:43.249 END TEST json_config 00:05:43.249 ************************************ 00:05:43.249 14:51:01 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.249 14:51:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.249 14:51:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.249 14:51:01 -- common/autotest_common.sh@10 -- # set +x 00:05:43.249 ************************************ 00:05:43.249 START TEST json_config_extra_key 00:05:43.249 ************************************ 00:05:43.249 14:51:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.249 14:51:01 -- nvmf/common.sh@7 -- # uname -s 00:05:43.249 14:51:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.249 14:51:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.249 14:51:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.249 14:51:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.249 14:51:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.249 14:51:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.249 14:51:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.249 14:51:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.249 14:51:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.249 14:51:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.249 14:51:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:43.249 14:51:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:43.249 14:51:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.249 14:51:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.249 14:51:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.249 14:51:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.249 14:51:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.249 14:51:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.249 14:51:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.249 14:51:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.249 14:51:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.249 14:51:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.249 14:51:01 -- paths/export.sh@5 -- # export PATH 00:05:43.249 14:51:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.249 14:51:01 -- nvmf/common.sh@46 -- # : 0 00:05:43.249 14:51:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:43.249 14:51:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:43.249 14:51:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:43.249 14:51:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.249 14:51:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.249 14:51:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:43.249 14:51:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:43.249 14:51:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:43.249 INFO: launching applications... 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3078532 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:43.249 Waiting for target to run... 
00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3078532 /var/tmp/spdk_tgt.sock 00:05:43.249 14:51:01 -- common/autotest_common.sh@819 -- # '[' -z 3078532 ']' 00:05:43.249 14:51:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.249 14:51:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.249 14:51:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.249 14:51:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.249 14:51:01 -- common/autotest_common.sh@10 -- # set +x 00:05:43.249 14:51:01 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.250 [2024-06-11 14:51:01.897244] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:43.250 [2024-06-11 14:51:01.897306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3078532 ] 00:05:43.250 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.508 [2024-06-11 14:51:02.214387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.508 [2024-06-11 14:51:02.291173] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.508 [2024-06-11 14:51:02.291305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.075 14:51:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.075 14:51:02 -- common/autotest_common.sh@852 -- # return 0 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:44.075 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:44.075 INFO: shutting down applications... 
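The launch-and-wait pattern shown here starts spdk_tgt with the extra_key.json config and blocks until the RPC socket answers before the test proceeds to shutdown. A minimal sketch under the same paths, polling rpc_get_methods in place of the harness's waitforlisten helper:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $spdk/test/json_config/extra_key.json &
    pid=$!
    # wait until the target accepts RPCs on the UNIX domain socket
    until $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done
    # graceful shutdown, as json_config_test_shutdown_app does
    kill -SIGINT $pid
    wait $pid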
00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3078532 ]] 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3078532 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3078532 00:05:44.075 14:51:02 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:44.653 14:51:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:44.653 14:51:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:44.653 14:51:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3078532 00:05:44.653 14:51:03 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:44.653 14:51:03 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:44.653 14:51:03 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:44.653 14:51:03 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:44.653 SPDK target shutdown done 00:05:44.653 14:51:03 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:44.653 Success 00:05:44.653 00:05:44.653 real 0m1.523s 00:05:44.653 user 0m1.441s 00:05:44.653 sys 0m0.393s 00:05:44.653 14:51:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.653 14:51:03 -- common/autotest_common.sh@10 -- # set +x 00:05:44.653 ************************************ 00:05:44.653 END TEST json_config_extra_key 00:05:44.653 ************************************ 00:05:44.653 14:51:03 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.653 14:51:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.653 14:51:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.653 14:51:03 -- common/autotest_common.sh@10 -- # set +x 00:05:44.653 ************************************ 00:05:44.653 START TEST alias_rpc 00:05:44.653 ************************************ 00:05:44.653 14:51:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.653 * Looking for test storage... 00:05:44.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:44.653 14:51:03 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.653 14:51:03 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3078861 00:05:44.653 14:51:03 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3078861 00:05:44.653 14:51:03 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.653 14:51:03 -- common/autotest_common.sh@819 -- # '[' -z 3078861 ']' 00:05:44.653 14:51:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.653 14:51:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.653 14:51:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:44.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.653 14:51:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.653 14:51:03 -- common/autotest_common.sh@10 -- # set +x 00:05:44.937 [2024-06-11 14:51:03.491788] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:44.937 [2024-06-11 14:51:03.491852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3078861 ] 00:05:44.937 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.937 [2024-06-11 14:51:03.580744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.937 [2024-06-11 14:51:03.668183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.937 [2024-06-11 14:51:03.668338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.889 14:51:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.889 14:51:04 -- common/autotest_common.sh@852 -- # return 0 00:05:45.889 14:51:04 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:45.889 14:51:04 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3078861 00:05:45.889 14:51:04 -- common/autotest_common.sh@926 -- # '[' -z 3078861 ']' 00:05:45.889 14:51:04 -- common/autotest_common.sh@930 -- # kill -0 3078861 00:05:45.889 14:51:04 -- common/autotest_common.sh@931 -- # uname 00:05:45.889 14:51:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.889 14:51:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3078861 00:05:45.889 14:51:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.889 14:51:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.889 14:51:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3078861' 00:05:45.889 killing process with pid 3078861 00:05:45.889 14:51:04 -- common/autotest_common.sh@945 -- # kill 3078861 00:05:45.889 14:51:04 -- common/autotest_common.sh@950 -- # wait 3078861 00:05:46.456 00:05:46.456 real 0m1.731s 00:05:46.456 user 0m1.998s 00:05:46.456 sys 0m0.459s 00:05:46.456 14:51:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.456 14:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.456 ************************************ 00:05:46.456 END TEST alias_rpc 00:05:46.456 ************************************ 00:05:46.456 14:51:05 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:46.456 14:51:05 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.456 14:51:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.456 14:51:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.456 14:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.456 ************************************ 00:05:46.456 START TEST spdkcli_tcp 00:05:46.456 ************************************ 00:05:46.456 14:51:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.456 * Looking for test storage... 
00:05:46.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:46.456 14:51:05 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:46.456 14:51:05 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:46.456 14:51:05 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:46.456 14:51:05 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.456 14:51:05 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.456 14:51:05 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.456 14:51:05 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.456 14:51:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.456 14:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.456 14:51:05 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3079195 00:05:46.456 14:51:05 -- spdkcli/tcp.sh@27 -- # waitforlisten 3079195 00:05:46.456 14:51:05 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.456 14:51:05 -- common/autotest_common.sh@819 -- # '[' -z 3079195 ']' 00:05:46.456 14:51:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.456 14:51:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.456 14:51:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.456 14:51:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.456 14:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.456 [2024-06-11 14:51:05.257206] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:46.456 [2024-06-11 14:51:05.257264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3079195 ] 00:05:46.456 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.714 [2024-06-11 14:51:05.345665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.714 [2024-06-11 14:51:05.434229] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.714 [2024-06-11 14:51:05.434410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.714 [2024-06-11 14:51:05.434415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.650 14:51:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.650 14:51:06 -- common/autotest_common.sh@852 -- # return 0 00:05:47.650 14:51:06 -- spdkcli/tcp.sh@31 -- # socat_pid=3079458 00:05:47.650 14:51:06 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:47.650 14:51:06 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:47.650 [ 00:05:47.650 "bdev_malloc_delete", 00:05:47.650 "bdev_malloc_create", 00:05:47.650 "bdev_null_resize", 00:05:47.650 "bdev_null_delete", 00:05:47.650 "bdev_null_create", 00:05:47.650 "bdev_nvme_cuse_unregister", 00:05:47.650 "bdev_nvme_cuse_register", 00:05:47.650 "bdev_opal_new_user", 00:05:47.650 "bdev_opal_set_lock_state", 00:05:47.650 "bdev_opal_delete", 00:05:47.650 "bdev_opal_get_info", 00:05:47.650 "bdev_opal_create", 00:05:47.650 "bdev_nvme_opal_revert", 00:05:47.650 "bdev_nvme_opal_init", 00:05:47.650 "bdev_nvme_send_cmd", 00:05:47.650 "bdev_nvme_get_path_iostat", 00:05:47.650 "bdev_nvme_get_mdns_discovery_info", 00:05:47.650 "bdev_nvme_stop_mdns_discovery", 00:05:47.650 "bdev_nvme_start_mdns_discovery", 00:05:47.650 "bdev_nvme_set_multipath_policy", 00:05:47.650 "bdev_nvme_set_preferred_path", 00:05:47.650 "bdev_nvme_get_io_paths", 00:05:47.650 "bdev_nvme_remove_error_injection", 00:05:47.650 "bdev_nvme_add_error_injection", 00:05:47.650 "bdev_nvme_get_discovery_info", 00:05:47.650 "bdev_nvme_stop_discovery", 00:05:47.650 "bdev_nvme_start_discovery", 00:05:47.650 "bdev_nvme_get_controller_health_info", 00:05:47.650 "bdev_nvme_disable_controller", 00:05:47.650 "bdev_nvme_enable_controller", 00:05:47.650 "bdev_nvme_reset_controller", 00:05:47.650 "bdev_nvme_get_transport_statistics", 00:05:47.650 "bdev_nvme_apply_firmware", 00:05:47.650 "bdev_nvme_detach_controller", 00:05:47.650 "bdev_nvme_get_controllers", 00:05:47.650 "bdev_nvme_attach_controller", 00:05:47.650 "bdev_nvme_set_hotplug", 00:05:47.650 "bdev_nvme_set_options", 00:05:47.650 "bdev_passthru_delete", 00:05:47.650 "bdev_passthru_create", 00:05:47.650 "bdev_lvol_grow_lvstore", 00:05:47.650 "bdev_lvol_get_lvols", 00:05:47.650 "bdev_lvol_get_lvstores", 00:05:47.650 "bdev_lvol_delete", 00:05:47.650 "bdev_lvol_set_read_only", 00:05:47.651 "bdev_lvol_resize", 00:05:47.651 "bdev_lvol_decouple_parent", 00:05:47.651 "bdev_lvol_inflate", 00:05:47.651 "bdev_lvol_rename", 00:05:47.651 "bdev_lvol_clone_bdev", 00:05:47.651 "bdev_lvol_clone", 00:05:47.651 "bdev_lvol_snapshot", 00:05:47.651 "bdev_lvol_create", 00:05:47.651 "bdev_lvol_delete_lvstore", 00:05:47.651 "bdev_lvol_rename_lvstore", 00:05:47.651 "bdev_lvol_create_lvstore", 00:05:47.651 "bdev_raid_set_options", 00:05:47.651 
"bdev_raid_remove_base_bdev", 00:05:47.651 "bdev_raid_add_base_bdev", 00:05:47.651 "bdev_raid_delete", 00:05:47.651 "bdev_raid_create", 00:05:47.651 "bdev_raid_get_bdevs", 00:05:47.651 "bdev_error_inject_error", 00:05:47.651 "bdev_error_delete", 00:05:47.651 "bdev_error_create", 00:05:47.651 "bdev_split_delete", 00:05:47.651 "bdev_split_create", 00:05:47.651 "bdev_delay_delete", 00:05:47.651 "bdev_delay_create", 00:05:47.651 "bdev_delay_update_latency", 00:05:47.651 "bdev_zone_block_delete", 00:05:47.651 "bdev_zone_block_create", 00:05:47.651 "blobfs_create", 00:05:47.651 "blobfs_detect", 00:05:47.651 "blobfs_set_cache_size", 00:05:47.651 "bdev_aio_delete", 00:05:47.651 "bdev_aio_rescan", 00:05:47.651 "bdev_aio_create", 00:05:47.651 "bdev_ftl_set_property", 00:05:47.651 "bdev_ftl_get_properties", 00:05:47.651 "bdev_ftl_get_stats", 00:05:47.651 "bdev_ftl_unmap", 00:05:47.651 "bdev_ftl_unload", 00:05:47.651 "bdev_ftl_delete", 00:05:47.651 "bdev_ftl_load", 00:05:47.651 "bdev_ftl_create", 00:05:47.651 "bdev_virtio_attach_controller", 00:05:47.651 "bdev_virtio_scsi_get_devices", 00:05:47.651 "bdev_virtio_detach_controller", 00:05:47.651 "bdev_virtio_blk_set_hotplug", 00:05:47.651 "bdev_iscsi_delete", 00:05:47.651 "bdev_iscsi_create", 00:05:47.651 "bdev_iscsi_set_options", 00:05:47.651 "accel_error_inject_error", 00:05:47.651 "ioat_scan_accel_module", 00:05:47.651 "dsa_scan_accel_module", 00:05:47.651 "iaa_scan_accel_module", 00:05:47.651 "iscsi_set_options", 00:05:47.651 "iscsi_get_auth_groups", 00:05:47.651 "iscsi_auth_group_remove_secret", 00:05:47.651 "iscsi_auth_group_add_secret", 00:05:47.651 "iscsi_delete_auth_group", 00:05:47.651 "iscsi_create_auth_group", 00:05:47.651 "iscsi_set_discovery_auth", 00:05:47.651 "iscsi_get_options", 00:05:47.651 "iscsi_target_node_request_logout", 00:05:47.651 "iscsi_target_node_set_redirect", 00:05:47.651 "iscsi_target_node_set_auth", 00:05:47.651 "iscsi_target_node_add_lun", 00:05:47.651 "iscsi_get_connections", 00:05:47.651 "iscsi_portal_group_set_auth", 00:05:47.651 "iscsi_start_portal_group", 00:05:47.651 "iscsi_delete_portal_group", 00:05:47.651 "iscsi_create_portal_group", 00:05:47.651 "iscsi_get_portal_groups", 00:05:47.651 "iscsi_delete_target_node", 00:05:47.651 "iscsi_target_node_remove_pg_ig_maps", 00:05:47.651 "iscsi_target_node_add_pg_ig_maps", 00:05:47.651 "iscsi_create_target_node", 00:05:47.651 "iscsi_get_target_nodes", 00:05:47.651 "iscsi_delete_initiator_group", 00:05:47.651 "iscsi_initiator_group_remove_initiators", 00:05:47.651 "iscsi_initiator_group_add_initiators", 00:05:47.651 "iscsi_create_initiator_group", 00:05:47.651 "iscsi_get_initiator_groups", 00:05:47.651 "nvmf_set_crdt", 00:05:47.651 "nvmf_set_config", 00:05:47.651 "nvmf_set_max_subsystems", 00:05:47.651 "nvmf_subsystem_get_listeners", 00:05:47.651 "nvmf_subsystem_get_qpairs", 00:05:47.651 "nvmf_subsystem_get_controllers", 00:05:47.651 "nvmf_get_stats", 00:05:47.651 "nvmf_get_transports", 00:05:47.651 "nvmf_create_transport", 00:05:47.651 "nvmf_get_targets", 00:05:47.651 "nvmf_delete_target", 00:05:47.651 "nvmf_create_target", 00:05:47.651 "nvmf_subsystem_allow_any_host", 00:05:47.651 "nvmf_subsystem_remove_host", 00:05:47.651 "nvmf_subsystem_add_host", 00:05:47.651 "nvmf_subsystem_remove_ns", 00:05:47.651 "nvmf_subsystem_add_ns", 00:05:47.651 "nvmf_subsystem_listener_set_ana_state", 00:05:47.651 "nvmf_discovery_get_referrals", 00:05:47.651 "nvmf_discovery_remove_referral", 00:05:47.651 "nvmf_discovery_add_referral", 00:05:47.651 "nvmf_subsystem_remove_listener", 
00:05:47.651 "nvmf_subsystem_add_listener", 00:05:47.651 "nvmf_delete_subsystem", 00:05:47.651 "nvmf_create_subsystem", 00:05:47.651 "nvmf_get_subsystems", 00:05:47.651 "env_dpdk_get_mem_stats", 00:05:47.651 "nbd_get_disks", 00:05:47.651 "nbd_stop_disk", 00:05:47.651 "nbd_start_disk", 00:05:47.651 "ublk_recover_disk", 00:05:47.651 "ublk_get_disks", 00:05:47.651 "ublk_stop_disk", 00:05:47.651 "ublk_start_disk", 00:05:47.651 "ublk_destroy_target", 00:05:47.651 "ublk_create_target", 00:05:47.651 "virtio_blk_create_transport", 00:05:47.651 "virtio_blk_get_transports", 00:05:47.651 "vhost_controller_set_coalescing", 00:05:47.651 "vhost_get_controllers", 00:05:47.651 "vhost_delete_controller", 00:05:47.651 "vhost_create_blk_controller", 00:05:47.651 "vhost_scsi_controller_remove_target", 00:05:47.651 "vhost_scsi_controller_add_target", 00:05:47.651 "vhost_start_scsi_controller", 00:05:47.651 "vhost_create_scsi_controller", 00:05:47.651 "thread_set_cpumask", 00:05:47.651 "framework_get_scheduler", 00:05:47.651 "framework_set_scheduler", 00:05:47.651 "framework_get_reactors", 00:05:47.651 "thread_get_io_channels", 00:05:47.651 "thread_get_pollers", 00:05:47.651 "thread_get_stats", 00:05:47.651 "framework_monitor_context_switch", 00:05:47.651 "spdk_kill_instance", 00:05:47.651 "log_enable_timestamps", 00:05:47.651 "log_get_flags", 00:05:47.651 "log_clear_flag", 00:05:47.651 "log_set_flag", 00:05:47.651 "log_get_level", 00:05:47.651 "log_set_level", 00:05:47.651 "log_get_print_level", 00:05:47.651 "log_set_print_level", 00:05:47.651 "framework_enable_cpumask_locks", 00:05:47.651 "framework_disable_cpumask_locks", 00:05:47.651 "framework_wait_init", 00:05:47.651 "framework_start_init", 00:05:47.651 "scsi_get_devices", 00:05:47.651 "bdev_get_histogram", 00:05:47.651 "bdev_enable_histogram", 00:05:47.651 "bdev_set_qos_limit", 00:05:47.651 "bdev_set_qd_sampling_period", 00:05:47.651 "bdev_get_bdevs", 00:05:47.651 "bdev_reset_iostat", 00:05:47.651 "bdev_get_iostat", 00:05:47.651 "bdev_examine", 00:05:47.651 "bdev_wait_for_examine", 00:05:47.651 "bdev_set_options", 00:05:47.651 "notify_get_notifications", 00:05:47.651 "notify_get_types", 00:05:47.651 "accel_get_stats", 00:05:47.651 "accel_set_options", 00:05:47.651 "accel_set_driver", 00:05:47.651 "accel_crypto_key_destroy", 00:05:47.651 "accel_crypto_keys_get", 00:05:47.651 "accel_crypto_key_create", 00:05:47.651 "accel_assign_opc", 00:05:47.651 "accel_get_module_info", 00:05:47.651 "accel_get_opc_assignments", 00:05:47.651 "vmd_rescan", 00:05:47.651 "vmd_remove_device", 00:05:47.651 "vmd_enable", 00:05:47.651 "sock_set_default_impl", 00:05:47.651 "sock_impl_set_options", 00:05:47.651 "sock_impl_get_options", 00:05:47.651 "iobuf_get_stats", 00:05:47.651 "iobuf_set_options", 00:05:47.651 "framework_get_pci_devices", 00:05:47.651 "framework_get_config", 00:05:47.651 "framework_get_subsystems", 00:05:47.651 "trace_get_info", 00:05:47.651 "trace_get_tpoint_group_mask", 00:05:47.651 "trace_disable_tpoint_group", 00:05:47.651 "trace_enable_tpoint_group", 00:05:47.651 "trace_clear_tpoint_mask", 00:05:47.651 "trace_set_tpoint_mask", 00:05:47.651 "spdk_get_version", 00:05:47.651 "rpc_get_methods" 00:05:47.651 ] 00:05:47.651 14:51:06 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:47.651 14:51:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.651 14:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.651 14:51:06 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.651 14:51:06 -- spdkcli/tcp.sh@38 -- # killprocess 
3079195 00:05:47.651 14:51:06 -- common/autotest_common.sh@926 -- # '[' -z 3079195 ']' 00:05:47.652 14:51:06 -- common/autotest_common.sh@930 -- # kill -0 3079195 00:05:47.652 14:51:06 -- common/autotest_common.sh@931 -- # uname 00:05:47.652 14:51:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.652 14:51:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3079195 00:05:47.910 14:51:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.911 14:51:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.911 14:51:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3079195' 00:05:47.911 killing process with pid 3079195 00:05:47.911 14:51:06 -- common/autotest_common.sh@945 -- # kill 3079195 00:05:47.911 14:51:06 -- common/autotest_common.sh@950 -- # wait 3079195 00:05:48.170 00:05:48.170 real 0m1.732s 00:05:48.170 user 0m3.344s 00:05:48.170 sys 0m0.465s 00:05:48.170 14:51:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.170 14:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 ************************************ 00:05:48.170 END TEST spdkcli_tcp 00:05:48.170 ************************************ 00:05:48.170 14:51:06 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.170 14:51:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.170 14:51:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.170 14:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 ************************************ 00:05:48.170 START TEST dpdk_mem_utility 00:05:48.170 ************************************ 00:05:48.170 14:51:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.170 * Looking for test storage... 00:05:48.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:48.170 14:51:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.170 14:51:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3079542 00:05:48.170 14:51:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3079542 00:05:48.170 14:51:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.170 14:51:06 -- common/autotest_common.sh@819 -- # '[' -z 3079542 ']' 00:05:48.170 14:51:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.170 14:51:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.170 14:51:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.170 14:51:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.170 14:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:48.429 [2024-06-11 14:51:07.038869] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:48.429 [2024-06-11 14:51:07.038931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3079542 ] 00:05:48.429 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.429 [2024-06-11 14:51:07.127724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.429 [2024-06-11 14:51:07.213689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.429 [2024-06-11 14:51:07.213846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.366 14:51:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.366 14:51:07 -- common/autotest_common.sh@852 -- # return 0 00:05:49.366 14:51:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:49.366 14:51:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:49.366 14:51:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.366 14:51:07 -- common/autotest_common.sh@10 -- # set +x 00:05:49.366 { 00:05:49.366 "filename": "/tmp/spdk_mem_dump.txt" 00:05:49.366 } 00:05:49.366 14:51:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.366 14:51:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:49.366 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:49.366 1 heaps totaling size 814.000000 MiB 00:05:49.366 size: 814.000000 MiB heap id: 0 00:05:49.366 end heaps---------- 00:05:49.366 8 mempools totaling size 598.116089 MiB 00:05:49.366 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:49.366 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:49.366 size: 84.521057 MiB name: bdev_io_3079542 00:05:49.366 size: 51.011292 MiB name: evtpool_3079542 00:05:49.366 size: 50.003479 MiB name: msgpool_3079542 00:05:49.366 size: 21.763794 MiB name: PDU_Pool 00:05:49.366 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:49.366 size: 0.026123 MiB name: Session_Pool 00:05:49.366 end mempools------- 00:05:49.366 6 memzones totaling size 4.142822 MiB 00:05:49.366 size: 1.000366 MiB name: RG_ring_0_3079542 00:05:49.366 size: 1.000366 MiB name: RG_ring_1_3079542 00:05:49.366 size: 1.000366 MiB name: RG_ring_4_3079542 00:05:49.366 size: 1.000366 MiB name: RG_ring_5_3079542 00:05:49.366 size: 0.125366 MiB name: RG_ring_2_3079542 00:05:49.366 size: 0.015991 MiB name: RG_ring_3_3079542 00:05:49.366 end memzones------- 00:05:49.366 14:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:49.366 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:49.366 list of free elements. 
size: 12.519348 MiB 00:05:49.366 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:49.366 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:49.366 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:49.366 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:49.366 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:49.366 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:49.366 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:49.366 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:49.366 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:49.366 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:49.366 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:49.366 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:49.366 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:49.366 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:49.366 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:49.366 list of standard malloc elements. size: 199.218079 MiB 00:05:49.366 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:49.366 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:49.366 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:49.366 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:49.366 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:49.366 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:49.366 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:49.366 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:49.366 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:49.366 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:49.366 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:49.366 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:49.366 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:49.366 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:49.366 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:49.367 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:49.367 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:49.367 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:49.367 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:49.367 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:49.367 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:49.367 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:49.367 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:49.367 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:49.367 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:49.367 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:49.367 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:49.367 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:49.367 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:49.367 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:49.367 list of memzone associated elements. size: 602.262573 MiB 00:05:49.367 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:49.367 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:49.367 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:49.367 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:49.367 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:49.367 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3079542_0 00:05:49.367 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:49.367 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3079542_0 00:05:49.367 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:49.367 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3079542_0 00:05:49.367 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:49.367 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:49.367 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:49.367 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:49.367 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:49.367 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3079542 00:05:49.367 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:49.367 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3079542 00:05:49.367 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:49.367 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3079542 00:05:49.367 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:49.367 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:49.367 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:49.367 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:49.367 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:49.367 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:49.367 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:49.367 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:49.367 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:49.367 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3079542 00:05:49.367 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:49.367 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3079542 00:05:49.367 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:49.367 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3079542 00:05:49.367 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:49.367 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3079542 00:05:49.367 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:49.367 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3079542 00:05:49.367 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:49.367 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:49.367 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:49.367 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:49.367 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:49.367 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:49.367 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:49.367 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3079542 00:05:49.367 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:49.367 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:49.367 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:49.367 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:49.367 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:49.367 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3079542 00:05:49.367 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:49.367 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:49.367 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:49.367 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3079542 00:05:49.367 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:49.367 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3079542 00:05:49.367 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:49.367 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:49.367 14:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:49.367 14:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3079542 00:05:49.367 14:51:08 -- common/autotest_common.sh@926 -- # '[' -z 3079542 ']' 00:05:49.367 14:51:08 -- common/autotest_common.sh@930 -- # kill -0 3079542 00:05:49.367 14:51:08 -- common/autotest_common.sh@931 -- # uname 00:05:49.367 14:51:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:49.367 14:51:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3079542 00:05:49.367 14:51:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:49.367 14:51:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:49.367 14:51:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3079542' 00:05:49.367 killing process with pid 3079542 00:05:49.367 14:51:08 -- common/autotest_common.sh@945 -- # kill 3079542 00:05:49.367 14:51:08 -- common/autotest_common.sh@950 -- # wait 3079542 00:05:49.935 00:05:49.935 real 0m1.609s 00:05:49.935 user 0m1.794s 00:05:49.935 sys 0m0.439s 00:05:49.935 14:51:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.935 14:51:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.935 ************************************ 00:05:49.935 END TEST dpdk_mem_utility 00:05:49.935 ************************************ 00:05:49.935 14:51:08 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.935 14:51:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.935 14:51:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.935 14:51:08 -- common/autotest_common.sh@10 -- # set +x 
00:05:49.935 ************************************ 00:05:49.935 START TEST event 00:05:49.935 ************************************ 00:05:49.935 14:51:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.935 * Looking for test storage... 00:05:49.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.935 14:51:08 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:49.935 14:51:08 -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.935 14:51:08 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.935 14:51:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:49.935 14:51:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.935 14:51:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.935 ************************************ 00:05:49.935 START TEST event_perf 00:05:49.935 ************************************ 00:05:49.935 14:51:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.935 Running I/O for 1 seconds...[2024-06-11 14:51:08.668587] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:49.935 [2024-06-11 14:51:08.668658] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080146 ] 00:05:49.935 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.935 [2024-06-11 14:51:08.759887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.193 [2024-06-11 14:51:08.851862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.193 [2024-06-11 14:51:08.851888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.193 [2024-06-11 14:51:08.852032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.193 [2024-06-11 14:51:08.852040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.128 Running I/O for 1 seconds... 00:05:51.128 lcore 0: 164987 00:05:51.128 lcore 1: 164984 00:05:51.128 lcore 2: 164985 00:05:51.128 lcore 3: 164986 00:05:51.128 done. 
00:05:51.128 00:05:51.128 real 0m1.303s 00:05:51.128 user 0m4.196s 00:05:51.128 sys 0m0.100s 00:05:51.128 14:51:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.128 14:51:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.128 ************************************ 00:05:51.128 END TEST event_perf 00:05:51.128 ************************************ 00:05:51.386 14:51:09 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.386 14:51:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:51.386 14:51:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.386 14:51:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.386 ************************************ 00:05:51.386 START TEST event_reactor 00:05:51.386 ************************************ 00:05:51.386 14:51:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.386 [2024-06-11 14:51:10.011422] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:51.386 [2024-06-11 14:51:10.011502] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080594 ] 00:05:51.386 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.386 [2024-06-11 14:51:10.100367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.386 [2024-06-11 14:51:10.188045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.762 test_start 00:05:52.762 oneshot 00:05:52.762 tick 100 00:05:52.762 tick 100 00:05:52.762 tick 250 00:05:52.762 tick 100 00:05:52.762 tick 100 00:05:52.762 tick 100 00:05:52.762 tick 250 00:05:52.762 tick 500 00:05:52.762 tick 100 00:05:52.762 tick 100 00:05:52.762 tick 250 00:05:52.762 tick 100 00:05:52.762 tick 100 00:05:52.762 test_end 00:05:52.762 00:05:52.762 real 0m1.291s 00:05:52.762 user 0m1.188s 00:05:52.762 sys 0m0.098s 00:05:52.762 14:51:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.762 14:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:52.762 ************************************ 00:05:52.762 END TEST event_reactor 00:05:52.762 ************************************ 00:05:52.762 14:51:11 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.762 14:51:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:52.762 14:51:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.762 14:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:52.762 ************************************ 00:05:52.762 START TEST event_reactor_perf 00:05:52.762 ************************************ 00:05:52.762 14:51:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.762 [2024-06-11 14:51:11.341871] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:52.762 [2024-06-11 14:51:11.341942] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080827 ] 00:05:52.762 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.762 [2024-06-11 14:51:11.430621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.762 [2024-06-11 14:51:11.516166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.137 test_start 00:05:54.137 test_end 00:05:54.137 Performance: 309988 events per second 00:05:54.137 00:05:54.137 real 0m1.290s 00:05:54.137 user 0m1.189s 00:05:54.137 sys 0m0.095s 00:05:54.137 14:51:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.137 14:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.137 ************************************ 00:05:54.137 END TEST event_reactor_perf 00:05:54.137 ************************************ 00:05:54.137 14:51:12 -- event/event.sh@49 -- # uname -s 00:05:54.137 14:51:12 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.137 14:51:12 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:54.137 14:51:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.137 14:51:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.137 14:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.137 ************************************ 00:05:54.137 START TEST event_scheduler 00:05:54.137 ************************************ 00:05:54.137 14:51:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:54.137 * Looking for test storage... 00:05:54.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:54.137 14:51:12 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.137 14:51:12 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3081122 00:05:54.137 14:51:12 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.137 14:51:12 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.137 14:51:12 -- scheduler/scheduler.sh@37 -- # waitforlisten 3081122 00:05:54.137 14:51:12 -- common/autotest_common.sh@819 -- # '[' -z 3081122 ']' 00:05:54.137 14:51:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.137 14:51:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.137 14:51:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.137 14:51:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.137 14:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.137 [2024-06-11 14:51:12.778671] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:54.137 [2024-06-11 14:51:12.778735] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081122 ] 00:05:54.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.137 [2024-06-11 14:51:12.843367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.137 [2024-06-11 14:51:12.917622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.137 [2024-06-11 14:51:12.917716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.137 [2024-06-11 14:51:12.917826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.137 [2024-06-11 14:51:12.917827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.137 14:51:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.137 14:51:12 -- common/autotest_common.sh@852 -- # return 0 00:05:54.137 14:51:12 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.137 14:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.137 14:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.137 POWER: Env isn't set yet! 00:05:54.137 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:54.137 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.137 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.137 POWER: Attempting to initialise PSTAT power management... 00:05:54.396 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:54.396 POWER: Initialized successfully for lcore 0 power management 00:05:54.396 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:54.396 POWER: Initialized successfully for lcore 1 power management 00:05:54.396 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:54.396 POWER: Initialized successfully for lcore 2 power management 00:05:54.396 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:54.396 POWER: Initialized successfully for lcore 3 power management 00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 [2024-06-11 14:51:13.103333] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
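Because the scheduler app was started with --wait-for-rpc, the framework stays paused until it is explicitly initialized; the two rpc_cmd calls above select the dynamic scheduler and then release that pause. A sketch of the same flow with rpc.py directly, assuming the app listens on the default /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock framework_set_scheduler dynamic   # set before init, as above
    $rpc -s /var/tmp/spdk.sock framework_start_init              # lifts the --wait-for-rpc pause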
00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.396 14:51:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.396 14:51:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 ************************************ 00:05:54.396 START TEST scheduler_create_thread 00:05:54.396 ************************************ 00:05:54.396 14:51:13 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 2 00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 3 00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 4 00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 5 00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 6 00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 7 00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 8 00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 9 00:05:54.396 
14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.396 10 00:05:54.396 14:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.396 14:51:13 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:54.396 14:51:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.396 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.775 14:51:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.775 14:51:14 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:55.775 14:51:14 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:55.775 14:51:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.775 14:51:14 -- common/autotest_common.sh@10 -- # set +x 00:05:56.712 14:51:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.712 14:51:15 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.712 14:51:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.712 14:51:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.650 14:51:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.650 14:51:16 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:57.650 14:51:16 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:57.650 14:51:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.650 14:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.218 14:51:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.218 00:05:58.218 real 0m3.891s 00:05:58.218 user 0m0.025s 00:05:58.218 sys 0m0.003s 00:05:58.218 14:51:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.218 14:51:17 -- common/autotest_common.sh@10 -- # set +x 00:05:58.218 ************************************ 00:05:58.218 END TEST scheduler_create_thread 00:05:58.218 ************************************ 00:05:58.218 14:51:17 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:58.218 14:51:17 -- scheduler/scheduler.sh@46 -- # killprocess 3081122 00:05:58.218 14:51:17 -- common/autotest_common.sh@926 -- # '[' -z 3081122 ']' 00:05:58.218 14:51:17 -- common/autotest_common.sh@930 -- # kill -0 3081122 00:05:58.218 14:51:17 -- common/autotest_common.sh@931 -- # uname 00:05:58.218 14:51:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:58.218 14:51:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3081122 00:05:58.477 14:51:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:58.477 14:51:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:58.477 14:51:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3081122' 00:05:58.477 killing process with pid 3081122 00:05:58.477 14:51:17 -- common/autotest_common.sh@945 -- # kill 3081122 00:05:58.477 14:51:17 -- common/autotest_common.sh@950 -- # wait 3081122 00:05:58.736 [2024-06-11 14:51:17.382888] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
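(Aside, not part of the captured output: the scheduler_create_thread test that just stopped drives the whole thread lifecycle through the scheduler_plugin RPCs visible in the trace — eight pinned active/idle threads on masks 0x1..0x8, a one-third-active thread, a half_active thread whose busy percentage is raised to 50, and a throwaway thread that is deleted again. Condensed below, with rpc.py standing in for the rpc_cmd wrapper and assuming the test's scheduler_plugin module is importable (e.g. on PYTHONPATH); the numeric IDs are whatever the create calls return:

tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)   # starts idle
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50                # raise to 50% busy
did=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$did"                       # remove the throwaway thread)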
00:05:58.736 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:58.736 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:58.736 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:58.736 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:58.736 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:58.736 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:58.736 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:58.736 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:58.995 00:05:58.995 real 0m5.016s 00:05:58.995 user 0m9.650s 00:05:58.995 sys 0m0.302s 00:05:58.995 14:51:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.995 14:51:17 -- common/autotest_common.sh@10 -- # set +x 00:05:58.995 ************************************ 00:05:58.995 END TEST event_scheduler 00:05:58.995 ************************************ 00:05:58.995 14:51:17 -- event/event.sh@51 -- # modprobe -n nbd 00:05:58.995 14:51:17 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:58.995 14:51:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.995 14:51:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.995 14:51:17 -- common/autotest_common.sh@10 -- # set +x 00:05:58.995 ************************************ 00:05:58.995 START TEST app_repeat 00:05:58.995 ************************************ 00:05:58.995 14:51:17 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:58.995 14:51:17 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.995 14:51:17 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.995 14:51:17 -- event/event.sh@13 -- # local nbd_list 00:05:58.995 14:51:17 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.995 14:51:17 -- event/event.sh@14 -- # local bdev_list 00:05:58.995 14:51:17 -- event/event.sh@15 -- # local repeat_times=4 00:05:58.995 14:51:17 -- event/event.sh@17 -- # modprobe nbd 00:05:58.995 14:51:17 -- event/event.sh@19 -- # repeat_pid=3082218 00:05:58.995 14:51:17 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.995 14:51:17 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:58.995 14:51:17 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3082218' 00:05:58.995 Process app_repeat pid: 3082218 00:05:58.995 14:51:17 -- event/event.sh@23 -- # for i in {0..2} 00:05:58.995 14:51:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:58.995 spdk_app_start Round 0 00:05:58.995 14:51:17 -- event/event.sh@25 -- # waitforlisten 3082218 /var/tmp/spdk-nbd.sock 00:05:58.995 14:51:17 -- common/autotest_common.sh@819 -- # '[' -z 3082218 ']' 00:05:58.995 14:51:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.995 14:51:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.995 14:51:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:58.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.995 14:51:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.995 14:51:17 -- common/autotest_common.sh@10 -- # set +x 00:05:58.995 [2024-06-11 14:51:17.749043] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:58.996 [2024-06-11 14:51:17.749109] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082218 ] 00:05:58.996 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.996 [2024-06-11 14:51:17.831276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.255 [2024-06-11 14:51:17.925051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.255 [2024-06-11 14:51:17.925058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.191 14:51:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.191 14:51:18 -- common/autotest_common.sh@852 -- # return 0 00:06:00.191 14:51:18 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.191 Malloc0 00:06:00.191 14:51:18 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.451 Malloc1 00:06:00.451 14:51:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@12 -- # local i 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.451 14:51:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.709 /dev/nbd0 00:06:00.709 14:51:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.709 14:51:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.710 14:51:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:00.710 14:51:19 -- common/autotest_common.sh@857 -- # local i 00:06:00.710 14:51:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:00.710 14:51:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:00.710 14:51:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:00.710 14:51:19 -- 
common/autotest_common.sh@861 -- # break 00:06:00.710 14:51:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:00.710 14:51:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:00.710 14:51:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.710 1+0 records in 00:06:00.710 1+0 records out 00:06:00.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019789 s, 20.7 MB/s 00:06:00.710 14:51:19 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.710 14:51:19 -- common/autotest_common.sh@874 -- # size=4096 00:06:00.710 14:51:19 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.710 14:51:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:00.710 14:51:19 -- common/autotest_common.sh@877 -- # return 0 00:06:00.710 14:51:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.710 14:51:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.710 14:51:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.969 /dev/nbd1 00:06:00.969 14:51:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.969 14:51:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.969 14:51:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:00.969 14:51:19 -- common/autotest_common.sh@857 -- # local i 00:06:00.969 14:51:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:00.969 14:51:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:00.969 14:51:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:00.969 14:51:19 -- common/autotest_common.sh@861 -- # break 00:06:00.969 14:51:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:00.969 14:51:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:00.969 14:51:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.969 1+0 records in 00:06:00.969 1+0 records out 00:06:00.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146046 s, 28.0 MB/s 00:06:00.969 14:51:19 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.969 14:51:19 -- common/autotest_common.sh@874 -- # size=4096 00:06:00.969 14:51:19 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.969 14:51:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:00.969 14:51:19 -- common/autotest_common.sh@877 -- # return 0 00:06:00.969 14:51:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.969 14:51:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.969 14:51:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.969 14:51:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.969 14:51:19 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.228 14:51:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.228 { 00:06:01.228 "nbd_device": "/dev/nbd0", 00:06:01.228 "bdev_name": "Malloc0" 00:06:01.228 }, 00:06:01.228 { 00:06:01.228 "nbd_device": "/dev/nbd1", 
00:06:01.228 "bdev_name": "Malloc1" 00:06:01.228 } 00:06:01.228 ]' 00:06:01.228 14:51:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.228 { 00:06:01.228 "nbd_device": "/dev/nbd0", 00:06:01.228 "bdev_name": "Malloc0" 00:06:01.228 }, 00:06:01.228 { 00:06:01.228 "nbd_device": "/dev/nbd1", 00:06:01.228 "bdev_name": "Malloc1" 00:06:01.228 } 00:06:01.228 ]' 00:06:01.228 14:51:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.228 /dev/nbd1' 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.228 /dev/nbd1' 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.228 256+0 records in 00:06:01.228 256+0 records out 00:06:01.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00988387 s, 106 MB/s 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.228 14:51:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.487 256+0 records in 00:06:01.487 256+0 records out 00:06:01.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207768 s, 50.5 MB/s 00:06:01.487 14:51:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.487 14:51:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.487 256+0 records in 00:06:01.488 256+0 records out 00:06:01.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207294 s, 50.6 MB/s 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@51 -- # local i 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.488 14:51:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@41 -- # break 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.747 14:51:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@41 -- # break 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.006 14:51:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@65 -- # true 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.265 14:51:20 -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.265 14:51:20 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.523 14:51:21 -- event/event.sh@35 -- # 
sleep 3 00:06:02.782 [2024-06-11 14:51:21.427640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.783 [2024-06-11 14:51:21.506745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.783 [2024-06-11 14:51:21.506749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.783 [2024-06-11 14:51:21.552037] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.783 [2024-06-11 14:51:21.552086] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.070 14:51:24 -- event/event.sh@23 -- # for i in {0..2} 00:06:06.070 14:51:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:06.070 spdk_app_start Round 1 00:06:06.070 14:51:24 -- event/event.sh@25 -- # waitforlisten 3082218 /var/tmp/spdk-nbd.sock 00:06:06.070 14:51:24 -- common/autotest_common.sh@819 -- # '[' -z 3082218 ']' 00:06:06.070 14:51:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.070 14:51:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:06.070 14:51:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.070 14:51:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:06.070 14:51:24 -- common/autotest_common.sh@10 -- # set +x 00:06:06.070 14:51:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:06.070 14:51:24 -- common/autotest_common.sh@852 -- # return 0 00:06:06.070 14:51:24 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.070 Malloc0 00:06:06.070 14:51:24 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.070 Malloc1 00:06:06.328 14:51:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@12 -- # local i 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.328 14:51:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.328 /dev/nbd0 00:06:06.328 14:51:25 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.587 14:51:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.587 14:51:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:06.587 14:51:25 -- common/autotest_common.sh@857 -- # local i 00:06:06.587 14:51:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:06.587 14:51:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:06.587 14:51:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:06.587 14:51:25 -- common/autotest_common.sh@861 -- # break 00:06:06.587 14:51:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:06.587 14:51:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:06.587 14:51:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.587 1+0 records in 00:06:06.587 1+0 records out 00:06:06.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161122 s, 25.4 MB/s 00:06:06.587 14:51:25 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.587 14:51:25 -- common/autotest_common.sh@874 -- # size=4096 00:06:06.587 14:51:25 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.587 14:51:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:06.587 14:51:25 -- common/autotest_common.sh@877 -- # return 0 00:06:06.587 14:51:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.587 14:51:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.587 14:51:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.587 /dev/nbd1 00:06:06.846 14:51:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.846 14:51:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.846 14:51:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:06.846 14:51:25 -- common/autotest_common.sh@857 -- # local i 00:06:06.846 14:51:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:06.846 14:51:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:06.846 14:51:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:06.846 14:51:25 -- common/autotest_common.sh@861 -- # break 00:06:06.846 14:51:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:06.846 14:51:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:06.846 14:51:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.846 1+0 records in 00:06:06.846 1+0 records out 00:06:06.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200206 s, 20.5 MB/s 00:06:06.846 14:51:25 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.846 14:51:25 -- common/autotest_common.sh@874 -- # size=4096 00:06:06.846 14:51:25 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.846 14:51:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:06.846 14:51:25 -- common/autotest_common.sh@877 -- # return 0 00:06:06.846 14:51:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.846 14:51:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.846 14:51:25 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.846 14:51:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.847 14:51:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.106 { 00:06:07.106 "nbd_device": "/dev/nbd0", 00:06:07.106 "bdev_name": "Malloc0" 00:06:07.106 }, 00:06:07.106 { 00:06:07.106 "nbd_device": "/dev/nbd1", 00:06:07.106 "bdev_name": "Malloc1" 00:06:07.106 } 00:06:07.106 ]' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.106 { 00:06:07.106 "nbd_device": "/dev/nbd0", 00:06:07.106 "bdev_name": "Malloc0" 00:06:07.106 }, 00:06:07.106 { 00:06:07.106 "nbd_device": "/dev/nbd1", 00:06:07.106 "bdev_name": "Malloc1" 00:06:07.106 } 00:06:07.106 ]' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.106 /dev/nbd1' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.106 /dev/nbd1' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.106 256+0 records in 00:06:07.106 256+0 records out 00:06:07.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103002 s, 102 MB/s 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.106 256+0 records in 00:06:07.106 256+0 records out 00:06:07.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196863 s, 53.3 MB/s 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.106 256+0 records in 00:06:07.106 256+0 records out 00:06:07.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204982 s, 51.2 MB/s 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@51 -- # local i 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.106 14:51:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@41 -- # break 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.366 14:51:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@41 -- # break 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.624 14:51:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@65 -- # true 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.884 14:51:26 -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.884 14:51:26 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.143 14:51:26 -- event/event.sh@35 -- # sleep 3 00:06:08.403 [2024-06-11 14:51:27.132334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.403 [2024-06-11 14:51:27.215601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.403 [2024-06-11 14:51:27.215606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.662 [2024-06-11 14:51:27.260058] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.662 [2024-06-11 14:51:27.260099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.197 14:51:29 -- event/event.sh@23 -- # for i in {0..2} 00:06:11.197 14:51:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.197 spdk_app_start Round 2 00:06:11.197 14:51:29 -- event/event.sh@25 -- # waitforlisten 3082218 /var/tmp/spdk-nbd.sock 00:06:11.197 14:51:29 -- common/autotest_common.sh@819 -- # '[' -z 3082218 ']' 00:06:11.197 14:51:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.197 14:51:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.197 14:51:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:11.197 14:51:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.197 14:51:29 -- common/autotest_common.sh@10 -- # set +x 00:06:11.455 14:51:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.455 14:51:30 -- common/autotest_common.sh@852 -- # return 0 00:06:11.455 14:51:30 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.714 Malloc0 00:06:11.714 14:51:30 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.973 Malloc1 00:06:11.973 14:51:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@12 -- # local i 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.973 14:51:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.233 /dev/nbd0 00:06:12.233 14:51:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.233 14:51:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.233 14:51:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:12.233 14:51:30 -- common/autotest_common.sh@857 -- # local i 00:06:12.233 14:51:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.233 14:51:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.233 14:51:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:12.233 14:51:30 -- common/autotest_common.sh@861 -- # break 00:06:12.233 14:51:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.233 14:51:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.233 14:51:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.233 1+0 records in 00:06:12.233 1+0 records out 00:06:12.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203011 s, 20.2 MB/s 00:06:12.233 14:51:30 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.233 14:51:30 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.233 14:51:30 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.233 14:51:30 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:12.233 14:51:30 -- common/autotest_common.sh@877 -- # return 0 00:06:12.233 14:51:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.233 14:51:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.233 14:51:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.492 /dev/nbd1 00:06:12.492 14:51:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.492 14:51:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.492 14:51:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:12.492 14:51:31 -- common/autotest_common.sh@857 -- # local i 00:06:12.492 14:51:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.492 14:51:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.492 14:51:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:12.492 14:51:31 -- common/autotest_common.sh@861 -- # break 00:06:12.492 14:51:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.492 14:51:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.492 14:51:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.492 1+0 records in 00:06:12.492 1+0 records out 00:06:12.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200988 s, 20.4 MB/s 00:06:12.492 14:51:31 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.492 14:51:31 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.492 14:51:31 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.492 14:51:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:12.492 14:51:31 -- common/autotest_common.sh@877 -- # return 0 00:06:12.492 14:51:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.492 14:51:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.492 14:51:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.492 14:51:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.492 14:51:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.751 { 00:06:12.751 "nbd_device": "/dev/nbd0", 00:06:12.751 "bdev_name": "Malloc0" 00:06:12.751 }, 00:06:12.751 { 00:06:12.751 "nbd_device": "/dev/nbd1", 00:06:12.751 "bdev_name": "Malloc1" 00:06:12.751 } 00:06:12.751 ]' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.751 { 00:06:12.751 "nbd_device": "/dev/nbd0", 00:06:12.751 "bdev_name": "Malloc0" 00:06:12.751 }, 00:06:12.751 { 00:06:12.751 "nbd_device": "/dev/nbd1", 00:06:12.751 "bdev_name": "Malloc1" 00:06:12.751 } 00:06:12.751 ]' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.751 /dev/nbd1' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.751 /dev/nbd1' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.751 14:51:31 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.751 256+0 records in 00:06:12.751 256+0 records out 00:06:12.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104286 s, 101 MB/s 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.751 256+0 records in 00:06:12.751 256+0 records out 00:06:12.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190659 s, 55.0 MB/s 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.751 256+0 records in 00:06:12.751 256+0 records out 00:06:12.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203546 s, 51.5 MB/s 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.751 14:51:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.752 14:51:31 -- bdev/nbd_common.sh@51 -- # local i 00:06:12.752 14:51:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.752 14:51:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.010 14:51:31 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.010 14:51:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.010 14:51:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.010 14:51:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.010 14:51:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.010 14:51:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.010 14:51:31 -- bdev/nbd_common.sh@41 -- # break 00:06:13.010 14:51:31 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.010 14:51:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.010 14:51:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.268 14:51:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.268 14:51:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.268 14:51:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.268 14:51:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.268 14:51:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.268 14:51:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.268 14:51:31 -- bdev/nbd_common.sh@41 -- # break 00:06:13.269 14:51:31 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.269 14:51:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.269 14:51:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.269 14:51:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@65 -- # true 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.526 14:51:32 -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.526 14:51:32 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.784 14:51:32 -- event/event.sh@35 -- # sleep 3 00:06:14.042 [2024-06-11 14:51:32.720969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.042 [2024-06-11 14:51:32.799064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.042 [2024-06-11 14:51:32.799069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.042 [2024-06-11 14:51:32.844433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.042 [2024-06-11 14:51:32.844481] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
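(Aside, not part of the captured output: each of the three app_repeat rounds above performs the same nbd round trip — two 64 MiB malloc bdevs are created over /var/tmp/spdk-nbd.sock, exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random pattern is written through each device with O_DIRECT and compared back against the source file, then the disks are stopped and spdk_kill_instance ends the iteration so app_repeat can reinitialize for the next round. Condensed to one device, with the commands taken from the trace, rpc.py shown in place of the nbd_common.sh helpers, and the temp-file path shortened:

sock=/var/tmp/spdk-nbd.sock
./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096            # prints the bdev name, e.g. Malloc0
./scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256               # 1 MiB reference pattern
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct     # write it through the nbd device
cmp -b -n 1M nbdrandtest /dev/nbd0                                # verify what the bdev now holds
./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
./scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM            # end of round; the app reinitializes for the next one)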
00:06:16.655 14:51:35 -- event/event.sh@38 -- # waitforlisten 3082218 /var/tmp/spdk-nbd.sock 00:06:16.655 14:51:35 -- common/autotest_common.sh@819 -- # '[' -z 3082218 ']' 00:06:16.655 14:51:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.655 14:51:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.655 14:51:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.655 14:51:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.655 14:51:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.913 14:51:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.913 14:51:35 -- common/autotest_common.sh@852 -- # return 0 00:06:16.913 14:51:35 -- event/event.sh@39 -- # killprocess 3082218 00:06:16.913 14:51:35 -- common/autotest_common.sh@926 -- # '[' -z 3082218 ']' 00:06:16.913 14:51:35 -- common/autotest_common.sh@930 -- # kill -0 3082218 00:06:16.913 14:51:35 -- common/autotest_common.sh@931 -- # uname 00:06:16.913 14:51:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.913 14:51:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3082218 00:06:17.172 14:51:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:17.172 14:51:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:17.172 14:51:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3082218' 00:06:17.172 killing process with pid 3082218 00:06:17.172 14:51:35 -- common/autotest_common.sh@945 -- # kill 3082218 00:06:17.172 14:51:35 -- common/autotest_common.sh@950 -- # wait 3082218 00:06:17.172 spdk_app_start is called in Round 0. 00:06:17.172 Shutdown signal received, stop current app iteration 00:06:17.172 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:17.172 spdk_app_start is called in Round 1. 00:06:17.172 Shutdown signal received, stop current app iteration 00:06:17.172 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:17.172 spdk_app_start is called in Round 2. 00:06:17.172 Shutdown signal received, stop current app iteration 00:06:17.172 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:17.172 spdk_app_start is called in Round 3. 
00:06:17.172 Shutdown signal received, stop current app iteration 00:06:17.172 14:51:35 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.172 14:51:35 -- event/event.sh@42 -- # return 0 00:06:17.172 00:06:17.172 real 0m18.256s 00:06:17.172 user 0m40.374s 00:06:17.172 sys 0m2.843s 00:06:17.172 14:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.173 14:51:35 -- common/autotest_common.sh@10 -- # set +x 00:06:17.173 ************************************ 00:06:17.173 END TEST app_repeat 00:06:17.173 ************************************ 00:06:17.173 14:51:36 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.173 14:51:36 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.173 14:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.173 14:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.173 14:51:36 -- common/autotest_common.sh@10 -- # set +x 00:06:17.173 ************************************ 00:06:17.173 START TEST cpu_locks 00:06:17.173 ************************************ 00:06:17.173 14:51:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.431 * Looking for test storage... 00:06:17.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:17.431 14:51:36 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.431 14:51:36 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.431 14:51:36 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.431 14:51:36 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:17.431 14:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.431 14:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.431 14:51:36 -- common/autotest_common.sh@10 -- # set +x 00:06:17.431 ************************************ 00:06:17.431 START TEST default_locks 00:06:17.431 ************************************ 00:06:17.431 14:51:36 -- common/autotest_common.sh@1104 -- # default_locks 00:06:17.431 14:51:36 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3085888 00:06:17.431 14:51:36 -- event/cpu_locks.sh@47 -- # waitforlisten 3085888 00:06:17.431 14:51:36 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.431 14:51:36 -- common/autotest_common.sh@819 -- # '[' -z 3085888 ']' 00:06:17.431 14:51:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.431 14:51:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.431 14:51:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.431 14:51:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.431 14:51:36 -- common/autotest_common.sh@10 -- # set +x 00:06:17.432 [2024-06-11 14:51:36.156746] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:17.432 [2024-06-11 14:51:36.156820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085888 ] 00:06:17.432 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.432 [2024-06-11 14:51:36.246471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.690 [2024-06-11 14:51:36.338126] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.690 [2024-06-11 14:51:36.338271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.257 14:51:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.257 14:51:37 -- common/autotest_common.sh@852 -- # return 0 00:06:18.257 14:51:37 -- event/cpu_locks.sh@49 -- # locks_exist 3085888 00:06:18.257 14:51:37 -- event/cpu_locks.sh@22 -- # lslocks -p 3085888 00:06:18.257 14:51:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.515 lslocks: write error 00:06:18.515 14:51:37 -- event/cpu_locks.sh@50 -- # killprocess 3085888 00:06:18.515 14:51:37 -- common/autotest_common.sh@926 -- # '[' -z 3085888 ']' 00:06:18.515 14:51:37 -- common/autotest_common.sh@930 -- # kill -0 3085888 00:06:18.515 14:51:37 -- common/autotest_common.sh@931 -- # uname 00:06:18.515 14:51:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:18.515 14:51:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3085888 00:06:18.515 14:51:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:18.515 14:51:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:18.515 14:51:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3085888' 00:06:18.516 killing process with pid 3085888 00:06:18.516 14:51:37 -- common/autotest_common.sh@945 -- # kill 3085888 00:06:18.516 14:51:37 -- common/autotest_common.sh@950 -- # wait 3085888 00:06:19.084 14:51:37 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3085888 00:06:19.084 14:51:37 -- common/autotest_common.sh@640 -- # local es=0 00:06:19.084 14:51:37 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3085888 00:06:19.084 14:51:37 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:19.084 14:51:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.084 14:51:37 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:19.084 14:51:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.084 14:51:37 -- common/autotest_common.sh@643 -- # waitforlisten 3085888 00:06:19.084 14:51:37 -- common/autotest_common.sh@819 -- # '[' -z 3085888 ']' 00:06:19.084 14:51:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.084 14:51:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.084 14:51:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
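The core of the default_locks test is a one-liner around lslocks; a minimal sketch with the binary path and mask taken from the trace (the waitforlisten plumbing is omitted):

  # A target started with core mask 0x1 should hold a file lock whose
  # path contains spdk_cpu_lock for the core it claimed.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &
  pid=$!
  # ... wait until /var/tmp/spdk.sock accepts RPCs ...
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock present"
  # The "lslocks: write error" lines in the trace are most likely lslocks hitting a
  # broken pipe once grep -q exits on the first match; the test treats them as noise.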
00:06:19.084 14:51:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.084 14:51:37 -- common/autotest_common.sh@10 -- # set +x 00:06:19.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3085888) - No such process 00:06:19.084 ERROR: process (pid: 3085888) is no longer running 00:06:19.084 14:51:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.084 14:51:37 -- common/autotest_common.sh@852 -- # return 1 00:06:19.084 14:51:37 -- common/autotest_common.sh@643 -- # es=1 00:06:19.084 14:51:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:19.084 14:51:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:19.084 14:51:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:19.084 14:51:37 -- event/cpu_locks.sh@54 -- # no_locks 00:06:19.084 14:51:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.084 14:51:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.084 14:51:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.084 00:06:19.084 real 0m1.591s 00:06:19.084 user 0m1.755s 00:06:19.084 sys 0m0.507s 00:06:19.084 14:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.084 14:51:37 -- common/autotest_common.sh@10 -- # set +x 00:06:19.084 ************************************ 00:06:19.084 END TEST default_locks 00:06:19.084 ************************************ 00:06:19.084 14:51:37 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:19.084 14:51:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.084 14:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.084 14:51:37 -- common/autotest_common.sh@10 -- # set +x 00:06:19.084 ************************************ 00:06:19.084 START TEST default_locks_via_rpc 00:06:19.084 ************************************ 00:06:19.084 14:51:37 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:19.084 14:51:37 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.084 14:51:37 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3086199 00:06:19.084 14:51:37 -- event/cpu_locks.sh@63 -- # waitforlisten 3086199 00:06:19.084 14:51:37 -- common/autotest_common.sh@819 -- # '[' -z 3086199 ']' 00:06:19.084 14:51:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.084 14:51:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.084 14:51:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.084 14:51:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.084 14:51:37 -- common/autotest_common.sh@10 -- # set +x 00:06:19.084 [2024-06-11 14:51:37.777115] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
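The default_locks_via_rpc variant that starts here toggles the same lock over JSON-RPC instead of at process launch: disable, confirm no spdk_cpu_lock files remain, re-enable, confirm the lock is back. A sketch with the RPC names from the trace ($pid stands for the already-running target):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null         # expect no lock files while disabled
  "$rpc" framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock       # lock re-acquired after re-enabling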
00:06:19.084 [2024-06-11 14:51:37.777177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086199 ] 00:06:19.084 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.084 [2024-06-11 14:51:37.865732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.343 [2024-06-11 14:51:37.954076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.343 [2024-06-11 14:51:37.954219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.911 14:51:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.911 14:51:38 -- common/autotest_common.sh@852 -- # return 0 00:06:19.911 14:51:38 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.911 14:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.911 14:51:38 -- common/autotest_common.sh@10 -- # set +x 00:06:19.911 14:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.911 14:51:38 -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.911 14:51:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.911 14:51:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.911 14:51:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.911 14:51:38 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.911 14:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.911 14:51:38 -- common/autotest_common.sh@10 -- # set +x 00:06:19.911 14:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.911 14:51:38 -- event/cpu_locks.sh@71 -- # locks_exist 3086199 00:06:19.911 14:51:38 -- event/cpu_locks.sh@22 -- # lslocks -p 3086199 00:06:19.911 14:51:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.170 14:51:38 -- event/cpu_locks.sh@73 -- # killprocess 3086199 00:06:20.170 14:51:38 -- common/autotest_common.sh@926 -- # '[' -z 3086199 ']' 00:06:20.170 14:51:38 -- common/autotest_common.sh@930 -- # kill -0 3086199 00:06:20.170 14:51:38 -- common/autotest_common.sh@931 -- # uname 00:06:20.170 14:51:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.170 14:51:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3086199 00:06:20.170 14:51:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.170 14:51:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.170 14:51:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3086199' 00:06:20.170 killing process with pid 3086199 00:06:20.170 14:51:38 -- common/autotest_common.sh@945 -- # kill 3086199 00:06:20.170 14:51:38 -- common/autotest_common.sh@950 -- # wait 3086199 00:06:20.738 00:06:20.738 real 0m1.615s 00:06:20.738 user 0m1.687s 00:06:20.738 sys 0m0.539s 00:06:20.738 14:51:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.738 14:51:39 -- common/autotest_common.sh@10 -- # set +x 00:06:20.738 ************************************ 00:06:20.738 END TEST default_locks_via_rpc 00:06:20.738 ************************************ 00:06:20.738 14:51:39 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.738 14:51:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.738 14:51:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.738 14:51:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:20.738 ************************************ 00:06:20.738 START TEST non_locking_app_on_locked_coremask 00:06:20.738 ************************************ 00:06:20.738 14:51:39 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:20.738 14:51:39 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3086504 00:06:20.738 14:51:39 -- event/cpu_locks.sh@81 -- # waitforlisten 3086504 /var/tmp/spdk.sock 00:06:20.738 14:51:39 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.738 14:51:39 -- common/autotest_common.sh@819 -- # '[' -z 3086504 ']' 00:06:20.738 14:51:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.738 14:51:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.738 14:51:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.738 14:51:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.738 14:51:39 -- common/autotest_common.sh@10 -- # set +x 00:06:20.738 [2024-06-11 14:51:39.441932] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:20.738 [2024-06-11 14:51:39.441991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086504 ] 00:06:20.738 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.738 [2024-06-11 14:51:39.529989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.998 [2024-06-11 14:51:39.618585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.998 [2024-06-11 14:51:39.618731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.566 14:51:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.566 14:51:40 -- common/autotest_common.sh@852 -- # return 0 00:06:21.566 14:51:40 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3086710 00:06:21.566 14:51:40 -- event/cpu_locks.sh@85 -- # waitforlisten 3086710 /var/tmp/spdk2.sock 00:06:21.566 14:51:40 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.566 14:51:40 -- common/autotest_common.sh@819 -- # '[' -z 3086710 ']' 00:06:21.566 14:51:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.566 14:51:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.566 14:51:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.566 14:51:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.566 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.825 [2024-06-11 14:51:40.417888] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:21.825 [2024-06-11 14:51:40.417950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086710 ] 00:06:21.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.825 [2024-06-11 14:51:40.539396] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.825 [2024-06-11 14:51:40.539425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.084 [2024-06-11 14:51:40.716214] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.084 [2024-06-11 14:51:40.716360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.652 14:51:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.652 14:51:41 -- common/autotest_common.sh@852 -- # return 0 00:06:22.652 14:51:41 -- event/cpu_locks.sh@87 -- # locks_exist 3086504 00:06:22.652 14:51:41 -- event/cpu_locks.sh@22 -- # lslocks -p 3086504 00:06:22.652 14:51:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.220 lslocks: write error 00:06:23.220 14:51:41 -- event/cpu_locks.sh@89 -- # killprocess 3086504 00:06:23.220 14:51:41 -- common/autotest_common.sh@926 -- # '[' -z 3086504 ']' 00:06:23.220 14:51:41 -- common/autotest_common.sh@930 -- # kill -0 3086504 00:06:23.220 14:51:41 -- common/autotest_common.sh@931 -- # uname 00:06:23.220 14:51:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.220 14:51:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3086504 00:06:23.220 14:51:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.220 14:51:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.220 14:51:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3086504' 00:06:23.220 killing process with pid 3086504 00:06:23.220 14:51:41 -- common/autotest_common.sh@945 -- # kill 3086504 00:06:23.220 14:51:41 -- common/autotest_common.sh@950 -- # wait 3086504 00:06:23.788 14:51:42 -- event/cpu_locks.sh@90 -- # killprocess 3086710 00:06:23.788 14:51:42 -- common/autotest_common.sh@926 -- # '[' -z 3086710 ']' 00:06:23.788 14:51:42 -- common/autotest_common.sh@930 -- # kill -0 3086710 00:06:23.788 14:51:42 -- common/autotest_common.sh@931 -- # uname 00:06:24.047 14:51:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.047 14:51:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3086710 00:06:24.047 14:51:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.047 14:51:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.047 14:51:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3086710' 00:06:24.047 killing process with pid 3086710 00:06:24.047 14:51:42 -- common/autotest_common.sh@945 -- # kill 3086710 00:06:24.047 14:51:42 -- common/autotest_common.sh@950 -- # wait 3086710 00:06:24.307 00:06:24.307 real 0m3.647s 00:06:24.307 user 0m4.069s 00:06:24.307 sys 0m1.048s 00:06:24.307 14:51:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.307 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 ************************************ 00:06:24.307 END TEST non_locking_app_on_locked_coremask 00:06:24.307 ************************************ 00:06:24.307 14:51:43 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:24.307 14:51:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.307 14:51:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.307 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 ************************************ 00:06:24.307 START TEST locking_app_on_unlocked_coremask 00:06:24.307 ************************************ 00:06:24.307 14:51:43 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:24.307 14:51:43 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3087166 00:06:24.307 14:51:43 -- event/cpu_locks.sh@99 -- # waitforlisten 3087166 /var/tmp/spdk.sock 00:06:24.307 14:51:43 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.307 14:51:43 -- common/autotest_common.sh@819 -- # '[' -z 3087166 ']' 00:06:24.307 14:51:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.307 14:51:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.307 14:51:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.307 14:51:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.307 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 [2024-06-11 14:51:43.127150] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:24.307 [2024-06-11 14:51:43.127214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087166 ] 00:06:24.567 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.567 [2024-06-11 14:51:43.218337] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:24.567 [2024-06-11 14:51:43.218369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.567 [2024-06-11 14:51:43.302639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.567 [2024-06-11 14:51:43.302798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.504 14:51:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.504 14:51:44 -- common/autotest_common.sh@852 -- # return 0 00:06:25.504 14:51:44 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3087343 00:06:25.504 14:51:44 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.504 14:51:44 -- event/cpu_locks.sh@103 -- # waitforlisten 3087343 /var/tmp/spdk2.sock 00:06:25.504 14:51:44 -- common/autotest_common.sh@819 -- # '[' -z 3087343 ']' 00:06:25.504 14:51:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.504 14:51:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.504 14:51:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
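Condensed, the scenario this test builds is two targets sharing core 0, which only works because the first one opts out of core locking; a sketch using the flags from the trace:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks &    # first app: takes no core lock
  pid1=$!
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &     # second app: claims the core 0 lock
  pid2=$!
  # ... wait for both sockets (waitforlisten in the real script) ...
  lslocks -p "$pid2" | grep -q spdk_cpu_lock      # the lock belongs to the second app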
00:06:25.504 14:51:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.504 14:51:44 -- common/autotest_common.sh@10 -- # set +x 00:06:25.504 [2024-06-11 14:51:44.103482] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:25.505 [2024-06-11 14:51:44.103542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087343 ] 00:06:25.505 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.505 [2024-06-11 14:51:44.226836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.771 [2024-06-11 14:51:44.395873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.771 [2024-06-11 14:51:44.396032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.342 14:51:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.342 14:51:44 -- common/autotest_common.sh@852 -- # return 0 00:06:26.342 14:51:44 -- event/cpu_locks.sh@105 -- # locks_exist 3087343 00:06:26.342 14:51:44 -- event/cpu_locks.sh@22 -- # lslocks -p 3087343 00:06:26.342 14:51:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.910 lslocks: write error 00:06:26.911 14:51:45 -- event/cpu_locks.sh@107 -- # killprocess 3087166 00:06:26.911 14:51:45 -- common/autotest_common.sh@926 -- # '[' -z 3087166 ']' 00:06:26.911 14:51:45 -- common/autotest_common.sh@930 -- # kill -0 3087166 00:06:26.911 14:51:45 -- common/autotest_common.sh@931 -- # uname 00:06:26.911 14:51:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.911 14:51:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3087166 00:06:26.911 14:51:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.911 14:51:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.911 14:51:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3087166' 00:06:26.911 killing process with pid 3087166 00:06:26.911 14:51:45 -- common/autotest_common.sh@945 -- # kill 3087166 00:06:26.911 14:51:45 -- common/autotest_common.sh@950 -- # wait 3087166 00:06:27.479 14:51:46 -- event/cpu_locks.sh@108 -- # killprocess 3087343 00:06:27.479 14:51:46 -- common/autotest_common.sh@926 -- # '[' -z 3087343 ']' 00:06:27.479 14:51:46 -- common/autotest_common.sh@930 -- # kill -0 3087343 00:06:27.479 14:51:46 -- common/autotest_common.sh@931 -- # uname 00:06:27.479 14:51:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.479 14:51:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3087343 00:06:27.738 14:51:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.738 14:51:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.738 14:51:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3087343' 00:06:27.738 killing process with pid 3087343 00:06:27.738 14:51:46 -- common/autotest_common.sh@945 -- # kill 3087343 00:06:27.738 14:51:46 -- common/autotest_common.sh@950 -- # wait 3087343 00:06:27.997 00:06:27.997 real 0m3.619s 00:06:27.997 user 0m3.985s 00:06:27.997 sys 0m1.040s 00:06:27.997 14:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.997 14:51:46 -- common/autotest_common.sh@10 -- # set +x 00:06:27.997 ************************************ 00:06:27.997 END TEST locking_app_on_unlocked_coremask 
00:06:27.997 ************************************ 00:06:27.997 14:51:46 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:27.997 14:51:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.997 14:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.997 14:51:46 -- common/autotest_common.sh@10 -- # set +x 00:06:27.997 ************************************ 00:06:27.997 START TEST locking_app_on_locked_coremask 00:06:27.997 ************************************ 00:06:27.997 14:51:46 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:27.997 14:51:46 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3087910 00:06:27.997 14:51:46 -- event/cpu_locks.sh@116 -- # waitforlisten 3087910 /var/tmp/spdk.sock 00:06:27.997 14:51:46 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.997 14:51:46 -- common/autotest_common.sh@819 -- # '[' -z 3087910 ']' 00:06:27.998 14:51:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.998 14:51:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.998 14:51:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.998 14:51:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.998 14:51:46 -- common/autotest_common.sh@10 -- # set +x 00:06:27.998 [2024-06-11 14:51:46.787394] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:27.998 [2024-06-11 14:51:46.787460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087910 ] 00:06:27.998 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.257 [2024-06-11 14:51:46.877266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.257 [2024-06-11 14:51:46.957149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.257 [2024-06-11 14:51:46.957305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.197 14:51:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.197 14:51:47 -- common/autotest_common.sh@852 -- # return 0 00:06:29.197 14:51:47 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.197 14:51:47 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3088177 00:06:29.197 14:51:47 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3088177 /var/tmp/spdk2.sock 00:06:29.197 14:51:47 -- common/autotest_common.sh@640 -- # local es=0 00:06:29.197 14:51:47 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3088177 /var/tmp/spdk2.sock 00:06:29.197 14:51:47 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:29.197 14:51:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.197 14:51:47 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:29.197 14:51:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.197 14:51:47 -- common/autotest_common.sh@643 -- # waitforlisten 3088177 /var/tmp/spdk2.sock 00:06:29.197 14:51:47 -- common/autotest_common.sh@819 -- 
# '[' -z 3088177 ']' 00:06:29.197 14:51:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.197 14:51:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.197 14:51:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.197 14:51:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.197 14:51:47 -- common/autotest_common.sh@10 -- # set +x 00:06:29.197 [2024-06-11 14:51:47.761104] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:29.197 [2024-06-11 14:51:47.761167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088177 ] 00:06:29.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.197 [2024-06-11 14:51:47.882200] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3087910 has claimed it. 00:06:29.197 [2024-06-11 14:51:47.882248] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3088177) - No such process 00:06:29.765 ERROR: process (pid: 3088177) is no longer running 00:06:29.765 14:51:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.765 14:51:48 -- common/autotest_common.sh@852 -- # return 1 00:06:29.765 14:51:48 -- common/autotest_common.sh@643 -- # es=1 00:06:29.765 14:51:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:29.765 14:51:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:29.765 14:51:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:29.765 14:51:48 -- event/cpu_locks.sh@122 -- # locks_exist 3087910 00:06:29.765 14:51:48 -- event/cpu_locks.sh@22 -- # lslocks -p 3087910 00:06:29.765 14:51:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.024 lslocks: write error 00:06:30.024 14:51:48 -- event/cpu_locks.sh@124 -- # killprocess 3087910 00:06:30.024 14:51:48 -- common/autotest_common.sh@926 -- # '[' -z 3087910 ']' 00:06:30.024 14:51:48 -- common/autotest_common.sh@930 -- # kill -0 3087910 00:06:30.024 14:51:48 -- common/autotest_common.sh@931 -- # uname 00:06:30.024 14:51:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:30.024 14:51:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3087910 00:06:30.024 14:51:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:30.024 14:51:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:30.024 14:51:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3087910' 00:06:30.024 killing process with pid 3087910 00:06:30.024 14:51:48 -- common/autotest_common.sh@945 -- # kill 3087910 00:06:30.024 14:51:48 -- common/autotest_common.sh@950 -- # wait 3087910 00:06:30.592 00:06:30.592 real 0m2.457s 00:06:30.592 user 0m2.757s 00:06:30.592 sys 0m0.678s 00:06:30.592 14:51:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.592 14:51:49 -- common/autotest_common.sh@10 -- # set +x 00:06:30.592 ************************************ 00:06:30.592 END TEST locking_app_on_locked_coremask 00:06:30.592 ************************************ 00:06:30.592 
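The locked-coremask case is the inverse: with locking left on, a second target on an already-claimed core must refuse to start, which is exactly the ERROR / "No such process" exchange recorded above. A compressed sketch:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                            # claims the core 0 lock
  pid1=$!
  # ... wait until the first target is up ...
  # The second instance is expected to abort with
  #   "Cannot create lock on core 0, probably process <pid1> has claimed it."
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock || echo "refused as expected"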
14:51:49 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:30.592 14:51:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.592 14:51:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.592 14:51:49 -- common/autotest_common.sh@10 -- # set +x 00:06:30.592 ************************************ 00:06:30.592 START TEST locking_overlapped_coremask 00:06:30.592 ************************************ 00:06:30.592 14:51:49 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:30.592 14:51:49 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3088468 00:06:30.592 14:51:49 -- event/cpu_locks.sh@133 -- # waitforlisten 3088468 /var/tmp/spdk.sock 00:06:30.592 14:51:49 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:30.592 14:51:49 -- common/autotest_common.sh@819 -- # '[' -z 3088468 ']' 00:06:30.592 14:51:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.592 14:51:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.592 14:51:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.592 14:51:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.592 14:51:49 -- common/autotest_common.sh@10 -- # set +x 00:06:30.593 [2024-06-11 14:51:49.283894] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:30.593 [2024-06-11 14:51:49.283962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088468 ] 00:06:30.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.593 [2024-06-11 14:51:49.373166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.852 [2024-06-11 14:51:49.454855] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.852 [2024-06-11 14:51:49.455118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.852 [2024-06-11 14:51:49.455138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.852 [2024-06-11 14:51:49.455143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.420 14:51:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.420 14:51:50 -- common/autotest_common.sh@852 -- # return 0 00:06:31.420 14:51:50 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3088553 00:06:31.420 14:51:50 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:31.420 14:51:50 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3088553 /var/tmp/spdk2.sock 00:06:31.420 14:51:50 -- common/autotest_common.sh@640 -- # local es=0 00:06:31.420 14:51:50 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3088553 /var/tmp/spdk2.sock 00:06:31.420 14:51:50 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:31.420 14:51:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.420 14:51:50 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:31.420 14:51:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.420 14:51:50 
-- common/autotest_common.sh@643 -- # waitforlisten 3088553 /var/tmp/spdk2.sock 00:06:31.420 14:51:50 -- common/autotest_common.sh@819 -- # '[' -z 3088553 ']' 00:06:31.420 14:51:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.420 14:51:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.420 14:51:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.420 14:51:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.420 14:51:50 -- common/autotest_common.sh@10 -- # set +x 00:06:31.681 [2024-06-11 14:51:50.270071] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:31.681 [2024-06-11 14:51:50.270135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088553 ] 00:06:31.681 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.681 [2024-06-11 14:51:50.365258] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3088468 has claimed it. 00:06:31.681 [2024-06-11 14:51:50.365296] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:32.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3088553) - No such process 00:06:32.249 ERROR: process (pid: 3088553) is no longer running 00:06:32.249 14:51:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.249 14:51:50 -- common/autotest_common.sh@852 -- # return 1 00:06:32.249 14:51:50 -- common/autotest_common.sh@643 -- # es=1 00:06:32.249 14:51:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:32.249 14:51:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:32.249 14:51:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:32.249 14:51:50 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:32.249 14:51:50 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.249 14:51:50 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.249 14:51:50 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.249 14:51:50 -- event/cpu_locks.sh@141 -- # killprocess 3088468 00:06:32.249 14:51:50 -- common/autotest_common.sh@926 -- # '[' -z 3088468 ']' 00:06:32.249 14:51:50 -- common/autotest_common.sh@930 -- # kill -0 3088468 00:06:32.249 14:51:50 -- common/autotest_common.sh@931 -- # uname 00:06:32.249 14:51:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.249 14:51:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3088468 00:06:32.249 14:51:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.249 14:51:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.249 14:51:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3088468' 00:06:32.249 killing process with pid 3088468 00:06:32.249 14:51:51 -- common/autotest_common.sh@945 -- # kill 3088468 00:06:32.249 14:51:51 
-- common/autotest_common.sh@950 -- # wait 3088468 00:06:32.817 00:06:32.817 real 0m2.140s 00:06:32.817 user 0m6.092s 00:06:32.817 sys 0m0.462s 00:06:32.817 14:51:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.817 14:51:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.817 ************************************ 00:06:32.817 END TEST locking_overlapped_coremask 00:06:32.817 ************************************ 00:06:32.817 14:51:51 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:32.817 14:51:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.817 14:51:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.817 14:51:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.817 ************************************ 00:06:32.817 START TEST locking_overlapped_coremask_via_rpc 00:06:32.817 ************************************ 00:06:32.817 14:51:51 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:32.817 14:51:51 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3088780 00:06:32.817 14:51:51 -- event/cpu_locks.sh@149 -- # waitforlisten 3088780 /var/tmp/spdk.sock 00:06:32.817 14:51:51 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:32.817 14:51:51 -- common/autotest_common.sh@819 -- # '[' -z 3088780 ']' 00:06:32.817 14:51:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.817 14:51:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.817 14:51:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.817 14:51:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.817 14:51:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.817 [2024-06-11 14:51:51.462588] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:32.817 [2024-06-11 14:51:51.462649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088780 ] 00:06:32.817 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.817 [2024-06-11 14:51:51.550857] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
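Both overlapped-coremask tests hinge on the masks 0x7 (cores 0-2) and 0x1c (cores 2-4) colliding on core 2; a sketch of the non-RPC variant that just completed above:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x7 &                            # locks cores 0, 1 and 2
  pid1=$!
  # ... wait until the first target is up ...
  "$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock \
      || echo "refused: core 2 already claimed by $pid1"
  ls /var/tmp/spdk_cpu_lock_*                     # expect only _000, _001 and _002 to remain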
00:06:32.817 [2024-06-11 14:51:51.550889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.817 [2024-06-11 14:51:51.640240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.817 [2024-06-11 14:51:51.640422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.817 [2024-06-11 14:51:51.640535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.817 [2024-06-11 14:51:51.640536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.755 14:51:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.755 14:51:52 -- common/autotest_common.sh@852 -- # return 0 00:06:33.755 14:51:52 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3089044 00:06:33.755 14:51:52 -- event/cpu_locks.sh@153 -- # waitforlisten 3089044 /var/tmp/spdk2.sock 00:06:33.755 14:51:52 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:33.755 14:51:52 -- common/autotest_common.sh@819 -- # '[' -z 3089044 ']' 00:06:33.755 14:51:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.755 14:51:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.755 14:51:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.755 14:51:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.755 14:51:52 -- common/autotest_common.sh@10 -- # set +x 00:06:33.755 [2024-06-11 14:51:52.364718] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:33.755 [2024-06-11 14:51:52.364777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089044 ] 00:06:33.755 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.755 [2024-06-11 14:51:52.453251] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.755 [2024-06-11 14:51:52.453276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.755 [2024-06-11 14:51:52.587923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.755 [2024-06-11 14:51:52.588091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.755 [2024-06-11 14:51:52.592068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.755 [2024-06-11 14:51:52.592070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.691 14:51:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.691 14:51:53 -- common/autotest_common.sh@852 -- # return 0 00:06:34.691 14:51:53 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.691 14:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:34.691 14:51:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.691 14:51:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:34.691 14:51:53 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.691 14:51:53 -- common/autotest_common.sh@640 -- # local es=0 00:06:34.691 14:51:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.691 14:51:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:34.691 14:51:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.691 14:51:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:34.691 14:51:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.691 14:51:53 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.691 14:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:34.691 14:51:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.691 [2024-06-11 14:51:53.315089] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3088780 has claimed it. 00:06:34.691 request: 00:06:34.691 { 00:06:34.691 "method": "framework_enable_cpumask_locks", 00:06:34.691 "req_id": 1 00:06:34.691 } 00:06:34.691 Got JSON-RPC error response 00:06:34.691 response: 00:06:34.691 { 00:06:34.691 "code": -32603, 00:06:34.691 "message": "Failed to claim CPU core: 2" 00:06:34.691 } 00:06:34.691 14:51:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:34.691 14:51:53 -- common/autotest_common.sh@643 -- # es=1 00:06:34.691 14:51:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:34.691 14:51:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:34.691 14:51:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:34.691 14:51:53 -- event/cpu_locks.sh@158 -- # waitforlisten 3088780 /var/tmp/spdk.sock 00:06:34.691 14:51:53 -- common/autotest_common.sh@819 -- # '[' -z 3088780 ']' 00:06:34.691 14:51:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.691 14:51:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.691 14:51:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
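The _via_rpc variant defers locking to runtime: both targets start with --disable-cpumask-locks, the first then claims its cores over JSON-RPC, and the second's attempt fails with the -32603 response shown above. A sketch using the flags and sockets from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x7  --disable-cpumask-locks &
  "$spdk_tgt" -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # ... wait for both sockets ...
  "$rpc" framework_enable_cpumask_locks           # first app claims cores 0-2
  "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo "expected: Failed to claim CPU core: 2 (JSON-RPC error -32603)"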
00:06:34.691 14:51:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.691 14:51:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.950 14:51:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.950 14:51:53 -- common/autotest_common.sh@852 -- # return 0 00:06:34.951 14:51:53 -- event/cpu_locks.sh@159 -- # waitforlisten 3089044 /var/tmp/spdk2.sock 00:06:34.951 14:51:53 -- common/autotest_common.sh@819 -- # '[' -z 3089044 ']' 00:06:34.951 14:51:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.951 14:51:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.951 14:51:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.951 14:51:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.951 14:51:53 -- common/autotest_common.sh@10 -- # set +x 00:06:35.210 14:51:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.210 14:51:53 -- common/autotest_common.sh@852 -- # return 0 00:06:35.210 14:51:53 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:35.210 14:51:53 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.210 14:51:53 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.210 14:51:53 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.210 00:06:35.211 real 0m2.416s 00:06:35.211 user 0m1.157s 00:06:35.211 sys 0m0.188s 00:06:35.211 14:51:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.211 14:51:53 -- common/autotest_common.sh@10 -- # set +x 00:06:35.211 ************************************ 00:06:35.211 END TEST locking_overlapped_coremask_via_rpc 00:06:35.211 ************************************ 00:06:35.211 14:51:53 -- event/cpu_locks.sh@174 -- # cleanup 00:06:35.211 14:51:53 -- event/cpu_locks.sh@15 -- # [[ -z 3088780 ]] 00:06:35.211 14:51:53 -- event/cpu_locks.sh@15 -- # killprocess 3088780 00:06:35.211 14:51:53 -- common/autotest_common.sh@926 -- # '[' -z 3088780 ']' 00:06:35.211 14:51:53 -- common/autotest_common.sh@930 -- # kill -0 3088780 00:06:35.211 14:51:53 -- common/autotest_common.sh@931 -- # uname 00:06:35.211 14:51:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.211 14:51:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3088780 00:06:35.211 14:51:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.211 14:51:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.211 14:51:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3088780' 00:06:35.211 killing process with pid 3088780 00:06:35.211 14:51:53 -- common/autotest_common.sh@945 -- # kill 3088780 00:06:35.211 14:51:53 -- common/autotest_common.sh@950 -- # wait 3088780 00:06:35.470 14:51:54 -- event/cpu_locks.sh@16 -- # [[ -z 3089044 ]] 00:06:35.470 14:51:54 -- event/cpu_locks.sh@16 -- # killprocess 3089044 00:06:35.470 14:51:54 -- common/autotest_common.sh@926 -- # '[' -z 3089044 ']' 00:06:35.470 14:51:54 -- common/autotest_common.sh@930 -- # kill -0 3089044 00:06:35.470 14:51:54 -- common/autotest_common.sh@931 -- # uname 
00:06:35.470 14:51:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.470 14:51:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3089044 00:06:35.729 14:51:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:35.729 14:51:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:35.729 14:51:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3089044' 00:06:35.729 killing process with pid 3089044 00:06:35.729 14:51:54 -- common/autotest_common.sh@945 -- # kill 3089044 00:06:35.729 14:51:54 -- common/autotest_common.sh@950 -- # wait 3089044 00:06:35.989 14:51:54 -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.989 14:51:54 -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.989 14:51:54 -- event/cpu_locks.sh@15 -- # [[ -z 3088780 ]] 00:06:35.989 14:51:54 -- event/cpu_locks.sh@15 -- # killprocess 3088780 00:06:35.989 14:51:54 -- common/autotest_common.sh@926 -- # '[' -z 3088780 ']' 00:06:35.989 14:51:54 -- common/autotest_common.sh@930 -- # kill -0 3088780 00:06:35.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3088780) - No such process 00:06:35.989 14:51:54 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3088780 is not found' 00:06:35.989 Process with pid 3088780 is not found 00:06:35.989 14:51:54 -- event/cpu_locks.sh@16 -- # [[ -z 3089044 ]] 00:06:35.989 14:51:54 -- event/cpu_locks.sh@16 -- # killprocess 3089044 00:06:35.989 14:51:54 -- common/autotest_common.sh@926 -- # '[' -z 3089044 ']' 00:06:35.989 14:51:54 -- common/autotest_common.sh@930 -- # kill -0 3089044 00:06:35.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3089044) - No such process 00:06:35.989 14:51:54 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3089044 is not found' 00:06:35.989 Process with pid 3089044 is not found 00:06:35.989 14:51:54 -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.989 00:06:35.989 real 0m18.659s 00:06:35.989 user 0m33.576s 00:06:35.989 sys 0m5.294s 00:06:35.989 14:51:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.989 14:51:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.989 ************************************ 00:06:35.989 END TEST cpu_locks 00:06:35.989 ************************************ 00:06:35.989 00:06:35.989 real 0m46.149s 00:06:35.989 user 1m30.313s 00:06:35.989 sys 0m8.974s 00:06:35.989 14:51:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.989 14:51:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.989 ************************************ 00:06:35.989 END TEST event 00:06:35.989 ************************************ 00:06:35.989 14:51:54 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.989 14:51:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.989 14:51:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.989 14:51:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.989 ************************************ 00:06:35.989 START TEST thread 00:06:35.989 ************************************ 00:06:35.989 14:51:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.989 * Looking for test storage... 
00:06:35.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:35.989 14:51:54 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.989 14:51:54 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:35.989 14:51:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.989 14:51:54 -- common/autotest_common.sh@10 -- # set +x 00:06:36.249 ************************************ 00:06:36.249 START TEST thread_poller_perf 00:06:36.249 ************************************ 00:06:36.249 14:51:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.249 [2024-06-11 14:51:54.855661] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:36.249 [2024-06-11 14:51:54.855736] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089668 ] 00:06:36.249 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.249 [2024-06-11 14:51:54.946467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.249 [2024-06-11 14:51:55.032680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.249 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:37.628 ====================================== 00:06:37.628 busy:2216144316 (cyc) 00:06:37.628 total_run_count: 247000 00:06:37.628 tsc_hz: 2200000000 (cyc) 00:06:37.628 ====================================== 00:06:37.628 poller_cost: 8972 (cyc), 4078 (nsec) 00:06:37.628 00:06:37.628 real 0m1.305s 00:06:37.628 user 0m1.202s 00:06:37.628 sys 0m0.097s 00:06:37.628 14:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.628 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.628 ************************************ 00:06:37.628 END TEST thread_poller_perf 00:06:37.628 ************************************ 00:06:37.628 14:51:56 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.628 14:51:56 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:37.628 14:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.628 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.628 ************************************ 00:06:37.628 START TEST thread_poller_perf 00:06:37.628 ************************************ 00:06:37.628 14:51:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.628 [2024-06-11 14:51:56.200653] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
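The poller_cost line is derived from the two counters printed just above it; a quick arithmetic check with the numbers from this run:

  echo $(( 2216144316 / 247000 ))                 # ~8972 cycles per poller invocation
  awk 'BEGIN { print 8972 / 2200000000 * 1e9 }'   # ~4078 ns at the reported 2.2 GHz TSC

So each of the 1000 pollers, run with a 1 microsecond period for 1 second, cost roughly 4 microseconds of CPU per invocation on this machine.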
00:06:37.628 [2024-06-11 14:51:56.200738] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089896 ] 00:06:37.628 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.628 [2024-06-11 14:51:56.283348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.628 [2024-06-11 14:51:56.366216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.628 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:39.008 ====================================== 00:06:39.008 busy:2203089578 (cyc) 00:06:39.008 total_run_count: 3407000 00:06:39.008 tsc_hz: 2200000000 (cyc) 00:06:39.008 ====================================== 00:06:39.008 poller_cost: 646 (cyc), 293 (nsec) 00:06:39.008 00:06:39.008 real 0m1.280s 00:06:39.008 user 0m1.186s 00:06:39.008 sys 0m0.088s 00:06:39.008 14:51:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.008 14:51:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.008 ************************************ 00:06:39.008 END TEST thread_poller_perf 00:06:39.008 ************************************ 00:06:39.008 14:51:57 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:39.008 00:06:39.008 real 0m2.754s 00:06:39.008 user 0m2.447s 00:06:39.008 sys 0m0.314s 00:06:39.008 14:51:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.008 14:51:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.008 ************************************ 00:06:39.008 END TEST thread 00:06:39.008 ************************************ 00:06:39.008 14:51:57 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:39.008 14:51:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.008 14:51:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.008 14:51:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.008 ************************************ 00:06:39.008 START TEST accel 00:06:39.008 ************************************ 00:06:39.008 14:51:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:39.008 * Looking for test storage... 00:06:39.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:39.008 14:51:57 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:39.008 14:51:57 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:39.008 14:51:57 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:39.008 14:51:57 -- accel/accel.sh@59 -- # spdk_tgt_pid=3090230 00:06:39.008 14:51:57 -- accel/accel.sh@60 -- # waitforlisten 3090230 00:06:39.008 14:51:57 -- common/autotest_common.sh@819 -- # '[' -z 3090230 ']' 00:06:39.008 14:51:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.008 14:51:57 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:39.008 14:51:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.008 14:51:57 -- accel/accel.sh@58 -- # build_accel_config 00:06:39.008 14:51:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
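The poller_perf summaries above follow directly from the raw counters: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure converts that through tsc_hz. A minimal sketch of the arithmetic (values copied from the first run; the snippet is illustrative, not part of the test suite):

```bash
# Cross-check of the first run's summary (illustrative only):
# poller_cost (cyc) = busy / total_run_count, poller_cost (nsec) = cyc * 1e9 / tsc_hz.
busy=2216144316 runs=247000 tsc_hz=2200000000
awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
    'BEGIN { cyc = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'
```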
00:06:39.008 14:51:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.008 14:51:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.008 14:51:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.008 14:51:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.008 14:51:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.008 14:51:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.008 14:51:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.008 14:51:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.008 14:51:57 -- accel/accel.sh@42 -- # jq -r . 00:06:39.008 [2024-06-11 14:51:57.676534] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:39.008 [2024-06-11 14:51:57.676595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090230 ] 00:06:39.008 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.008 [2024-06-11 14:51:57.763622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.008 [2024-06-11 14:51:57.849481] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.008 [2024-06-11 14:51:57.849626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.946 14:51:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.946 14:51:58 -- common/autotest_common.sh@852 -- # return 0 00:06:39.946 14:51:58 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:39.946 14:51:58 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:39.946 14:51:58 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:39.946 14:51:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.946 14:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:39.946 14:51:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 
14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.946 14:51:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.946 14:51:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.946 14:51:58 -- accel/accel.sh@67 -- # killprocess 3090230 00:06:39.946 14:51:58 -- common/autotest_common.sh@926 -- # '[' -z 3090230 ']' 00:06:39.946 14:51:58 -- common/autotest_common.sh@930 -- # kill -0 3090230 00:06:39.946 14:51:58 -- common/autotest_common.sh@931 -- # uname 00:06:39.946 14:51:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.946 14:51:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3090230 00:06:39.946 14:51:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.946 14:51:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.946 14:51:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3090230' 00:06:39.946 killing process with pid 3090230 00:06:39.946 14:51:58 -- common/autotest_common.sh@945 -- # kill 3090230 00:06:39.946 14:51:58 -- common/autotest_common.sh@950 -- # wait 3090230 00:06:40.513 14:51:59 -- accel/accel.sh@68 -- # trap - ERR 00:06:40.513 14:51:59 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:40.513 14:51:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:40.513 14:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.513 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.513 14:51:59 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:40.513 14:51:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:40.513 14:51:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.513 14:51:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.513 14:51:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.513 14:51:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.513 14:51:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.513 14:51:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.513 14:51:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.513 14:51:59 -- accel/accel.sh@42 -- # jq -r . 
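The opcode loop traced above builds an expected_opcs map by querying the freshly started spdk_tgt for its opcode-to-module assignments and splitting each entry on '='. A hedged sketch of that pattern, reusing the jq filter seen in the trace (the rpc.py path is an assumption):

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location of rpc.py
declare -A expected_opcs
exp_opcs=($("$rpc" accel_get_opc_assignments \
            | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
for opc_opt in "${exp_opcs[@]}"; do
    IFS== read -r opc module <<< "$opc_opt"   # e.g. "copy=software" -> opc=copy, module=software
    expected_opcs["$opc"]=$module
done
```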
00:06:40.513 14:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.513 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.513 14:51:59 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:40.513 14:51:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:40.513 14:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.513 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.513 ************************************ 00:06:40.513 START TEST accel_missing_filename 00:06:40.513 ************************************ 00:06:40.513 14:51:59 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:40.513 14:51:59 -- common/autotest_common.sh@640 -- # local es=0 00:06:40.513 14:51:59 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:40.513 14:51:59 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:40.513 14:51:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.513 14:51:59 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:40.513 14:51:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.513 14:51:59 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:40.513 14:51:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:40.513 14:51:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.513 14:51:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.513 14:51:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.513 14:51:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.513 14:51:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.513 14:51:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.513 14:51:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.513 14:51:59 -- accel/accel.sh@42 -- # jq -r . 00:06:40.514 [2024-06-11 14:51:59.155291] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:40.514 [2024-06-11 14:51:59.155364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090563 ] 00:06:40.514 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.514 [2024-06-11 14:51:59.234913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.514 [2024-06-11 14:51:59.317455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.772 [2024-06-11 14:51:59.362401] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.772 [2024-06-11 14:51:59.425590] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:40.772 A filename is required. 
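"A filename is required." is the expected outcome here: accel_missing_filename wraps accel_perf in the harness's NOT helper, so the test passes precisely because compress was started without -l. A minimal sketch of that pattern, assuming a simplified stand-in for the real helper in autotest_common.sh (which also normalizes signal-style exit codes, as the es= handling below shows):

```bash
# Simplified, illustrative stand-in for NOT(): succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?      # run the wrapped command, capture its exit status
    (( es != 0 ))      # non-zero status means the negative test passed
}
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress \
    && echo "missing -l rejected as expected"
```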
00:06:40.772 14:51:59 -- common/autotest_common.sh@643 -- # es=234 00:06:40.772 14:51:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:40.772 14:51:59 -- common/autotest_common.sh@652 -- # es=106 00:06:40.772 14:51:59 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:40.772 14:51:59 -- common/autotest_common.sh@660 -- # es=1 00:06:40.772 14:51:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:40.772 00:06:40.772 real 0m0.401s 00:06:40.772 user 0m0.306s 00:06:40.772 sys 0m0.134s 00:06:40.772 14:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.772 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.772 ************************************ 00:06:40.772 END TEST accel_missing_filename 00:06:40.772 ************************************ 00:06:40.772 14:51:59 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.772 14:51:59 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:40.772 14:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.772 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.772 ************************************ 00:06:40.772 START TEST accel_compress_verify 00:06:40.772 ************************************ 00:06:40.772 14:51:59 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.772 14:51:59 -- common/autotest_common.sh@640 -- # local es=0 00:06:40.772 14:51:59 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.772 14:51:59 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:40.772 14:51:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.772 14:51:59 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:40.772 14:51:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.772 14:51:59 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.772 14:51:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.772 14:51:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.772 14:51:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.772 14:51:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.772 14:51:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.772 14:51:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.772 14:51:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.772 14:51:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.772 14:51:59 -- accel/accel.sh@42 -- # jq -r . 00:06:40.772 [2024-06-11 14:51:59.595664] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:40.772 [2024-06-11 14:51:59.595738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090598 ] 00:06:41.031 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.031 [2024-06-11 14:51:59.685275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.031 [2024-06-11 14:51:59.769961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.031 [2024-06-11 14:51:59.814896] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.290 [2024-06-11 14:51:59.878325] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:41.290 00:06:41.290 Compression does not support the verify option, aborting. 00:06:41.290 14:51:59 -- common/autotest_common.sh@643 -- # es=161 00:06:41.290 14:51:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.290 14:51:59 -- common/autotest_common.sh@652 -- # es=33 00:06:41.290 14:51:59 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:41.291 14:51:59 -- common/autotest_common.sh@660 -- # es=1 00:06:41.291 14:51:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.291 00:06:41.291 real 0m0.415s 00:06:41.291 user 0m0.308s 00:06:41.291 sys 0m0.149s 00:06:41.291 14:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.291 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:41.291 ************************************ 00:06:41.291 END TEST accel_compress_verify 00:06:41.291 ************************************ 00:06:41.291 14:52:00 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:41.291 14:52:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:41.291 14:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.291 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.291 ************************************ 00:06:41.291 START TEST accel_wrong_workload 00:06:41.291 ************************************ 00:06:41.291 14:52:00 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:41.291 14:52:00 -- common/autotest_common.sh@640 -- # local es=0 00:06:41.291 14:52:00 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:41.291 14:52:00 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:41.291 14:52:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.291 14:52:00 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:41.291 14:52:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.291 14:52:00 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:41.291 14:52:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:41.291 14:52:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.291 14:52:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.291 14:52:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.291 14:52:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.291 14:52:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.291 14:52:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.291 14:52:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.291 14:52:00 -- accel/accel.sh@42 -- # jq -r . 
00:06:41.291 Unsupported workload type: foobar 00:06:41.291 [2024-06-11 14:52:00.047713] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:41.291 accel_perf options: 00:06:41.291 [-h help message] 00:06:41.291 [-q queue depth per core] 00:06:41.291 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:41.291 [-T number of threads per core 00:06:41.291 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:41.291 [-t time in seconds] 00:06:41.291 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:41.291 [ dif_verify, , dif_generate, dif_generate_copy 00:06:41.291 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:41.291 [-l for compress/decompress workloads, name of uncompressed input file 00:06:41.291 [-S for crc32c workload, use this seed value (default 0) 00:06:41.291 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:41.291 [-f for fill workload, use this BYTE value (default 255) 00:06:41.291 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:41.291 [-y verify result if this switch is on] 00:06:41.291 [-a tasks to allocate per core (default: same value as -q)] 00:06:41.291 Can be used to spread operations across a wider range of memory. 00:06:41.291 14:52:00 -- common/autotest_common.sh@643 -- # es=1 00:06:41.291 14:52:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.291 14:52:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:41.291 14:52:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.291 00:06:41.291 real 0m0.033s 00:06:41.291 user 0m0.018s 00:06:41.291 sys 0m0.016s 00:06:41.291 14:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.291 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.291 ************************************ 00:06:41.291 END TEST accel_wrong_workload 00:06:41.291 ************************************ 00:06:41.291 Error: writing output failed: Broken pipe 00:06:41.291 14:52:00 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:41.291 14:52:00 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:41.291 14:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.291 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.291 ************************************ 00:06:41.291 START TEST accel_negative_buffers 00:06:41.291 ************************************ 00:06:41.291 14:52:00 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:41.291 14:52:00 -- common/autotest_common.sh@640 -- # local es=0 00:06:41.291 14:52:00 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:41.291 14:52:00 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:41.291 14:52:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.291 14:52:00 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:41.291 14:52:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.291 14:52:00 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:41.291 14:52:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:41.291 14:52:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.291 14:52:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.291 14:52:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.291 14:52:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.291 14:52:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.291 14:52:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.291 14:52:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.291 14:52:00 -- accel/accel.sh@42 -- # jq -r . 00:06:41.291 -x option must be non-negative. 00:06:41.291 [2024-06-11 14:52:00.117285] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:41.291 accel_perf options: 00:06:41.291 [-h help message] 00:06:41.291 [-q queue depth per core] 00:06:41.291 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:41.291 [-T number of threads per core 00:06:41.291 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:41.291 [-t time in seconds] 00:06:41.291 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:41.291 [ dif_verify, , dif_generate, dif_generate_copy 00:06:41.291 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:41.291 [-l for compress/decompress workloads, name of uncompressed input file 00:06:41.291 [-S for crc32c workload, use this seed value (default 0) 00:06:41.291 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:41.291 [-f for fill workload, use this BYTE value (default 255) 00:06:41.291 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:41.291 [-y verify result if this switch is on] 00:06:41.291 [-a tasks to allocate per core (default: same value as -q)] 00:06:41.291 Can be used to spread operations across a wider range of memory. 
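The usage text printed above lists every flag the accel sub-tests exercise. One illustrative invocation composed only from those documented options (the binary path is the one used throughout this run):

```bash
# 1-second software crc32c pass: seed 32, queue depth 32, 4 KiB transfers, verify results.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -q 32 -o 4096 -y
```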
00:06:41.291 14:52:00 -- common/autotest_common.sh@643 -- # es=1 00:06:41.291 14:52:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.291 14:52:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:41.291 14:52:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.291 00:06:41.291 real 0m0.031s 00:06:41.291 user 0m0.019s 00:06:41.291 sys 0m0.012s 00:06:41.291 14:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.291 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.291 ************************************ 00:06:41.291 END TEST accel_negative_buffers 00:06:41.291 ************************************ 00:06:41.551 Error: writing output failed: Broken pipe 00:06:41.551 14:52:00 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:41.551 14:52:00 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:41.551 14:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.551 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.551 ************************************ 00:06:41.551 START TEST accel_crc32c 00:06:41.551 ************************************ 00:06:41.551 14:52:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:41.551 14:52:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.551 14:52:00 -- accel/accel.sh@17 -- # local accel_module 00:06:41.551 14:52:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:41.551 14:52:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:41.551 14:52:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.551 14:52:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.551 14:52:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.551 14:52:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.551 14:52:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.551 14:52:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.551 14:52:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.551 14:52:00 -- accel/accel.sh@42 -- # jq -r . 00:06:41.551 [2024-06-11 14:52:00.187743] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:41.551 [2024-06-11 14:52:00.187805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090666 ] 00:06:41.551 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.551 [2024-06-11 14:52:00.277569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.551 [2024-06-11 14:52:00.361918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.929 14:52:01 -- accel/accel.sh@18 -- # out=' 00:06:42.929 SPDK Configuration: 00:06:42.929 Core mask: 0x1 00:06:42.929 00:06:42.929 Accel Perf Configuration: 00:06:42.929 Workload Type: crc32c 00:06:42.929 CRC-32C seed: 32 00:06:42.929 Transfer size: 4096 bytes 00:06:42.929 Vector count 1 00:06:42.929 Module: software 00:06:42.929 Queue depth: 32 00:06:42.929 Allocate depth: 32 00:06:42.929 # threads/core: 1 00:06:42.929 Run time: 1 seconds 00:06:42.929 Verify: Yes 00:06:42.929 00:06:42.929 Running for 1 seconds... 
00:06:42.929 00:06:42.929 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.929 ------------------------------------------------------------------------------------ 00:06:42.929 0,0 354656/s 1385 MiB/s 0 0 00:06:42.929 ==================================================================================== 00:06:42.929 Total 354656/s 1385 MiB/s 0 0' 00:06:42.929 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.929 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.929 14:52:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:42.929 14:52:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:42.929 14:52:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.929 14:52:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.929 14:52:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.929 14:52:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.929 14:52:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.929 14:52:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.929 14:52:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.929 14:52:01 -- accel/accel.sh@42 -- # jq -r . 00:06:42.929 [2024-06-11 14:52:01.600008] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:42.929 [2024-06-11 14:52:01.600080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090925 ] 00:06:42.929 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.929 [2024-06-11 14:52:01.689004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.188 [2024-06-11 14:52:01.772552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.188 14:52:01 -- accel/accel.sh@21 -- # val= 00:06:43.188 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.188 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.188 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.188 14:52:01 -- accel/accel.sh@21 -- # val= 00:06:43.188 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.188 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.188 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.188 14:52:01 -- accel/accel.sh@21 -- # val=0x1 00:06:43.188 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.188 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val= 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val= 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val=crc32c 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val=32 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 
14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val= 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val=software 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val=32 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val=32 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val=1 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val=Yes 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val= 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.189 14:52:01 -- accel/accel.sh@21 -- # val= 00:06:43.189 14:52:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # IFS=: 00:06:43.189 14:52:01 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 14:52:02 -- accel/accel.sh@21 -- # val= 00:06:44.618 14:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 14:52:02 -- accel/accel.sh@21 -- # val= 00:06:44.618 14:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 14:52:02 -- accel/accel.sh@21 -- # val= 00:06:44.618 14:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 14:52:02 -- accel/accel.sh@21 -- # val= 00:06:44.618 14:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 14:52:02 -- accel/accel.sh@21 -- # val= 00:06:44.618 14:52:02 -- accel/accel.sh@22 -- # case "$var" in 
00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 14:52:02 -- accel/accel.sh@21 -- # val= 00:06:44.618 14:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 14:52:02 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 14:52:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.618 14:52:02 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:44.618 14:52:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.618 00:06:44.618 real 0m2.829s 00:06:44.618 user 0m2.547s 00:06:44.618 sys 0m0.287s 00:06:44.618 14:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.618 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:06:44.618 ************************************ 00:06:44.618 END TEST accel_crc32c 00:06:44.618 ************************************ 00:06:44.618 14:52:03 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:44.618 14:52:03 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:44.618 14:52:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.618 14:52:03 -- common/autotest_common.sh@10 -- # set +x 00:06:44.618 ************************************ 00:06:44.618 START TEST accel_crc32c_C2 00:06:44.618 ************************************ 00:06:44.618 14:52:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:44.618 14:52:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.618 14:52:03 -- accel/accel.sh@17 -- # local accel_module 00:06:44.618 14:52:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:44.618 14:52:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:44.618 14:52:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.618 14:52:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.618 14:52:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.618 14:52:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.618 14:52:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.618 14:52:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.618 14:52:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.618 14:52:03 -- accel/accel.sh@42 -- # jq -r . 00:06:44.618 [2024-06-11 14:52:03.057504] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
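For the single-vector crc32c run that just finished, the bandwidth column is simply transfers per second times the 4096-byte transfer size; a quick illustrative cross-check:

```bash
# Transfers/s and transfer size copied from the crc32c table above (illustrative check).
awk 'BEGIN { printf "%.0f MiB/s\n", 354656 * 4096 / (1024 * 1024) }'    # -> 1385 MiB/s
```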
00:06:44.618 [2024-06-11 14:52:03.057582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091215 ] 00:06:44.618 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.618 [2024-06-11 14:52:03.147886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.618 [2024-06-11 14:52:03.231383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.615 14:52:04 -- accel/accel.sh@18 -- # out=' 00:06:45.615 SPDK Configuration: 00:06:45.615 Core mask: 0x1 00:06:45.615 00:06:45.615 Accel Perf Configuration: 00:06:45.615 Workload Type: crc32c 00:06:45.615 CRC-32C seed: 0 00:06:45.615 Transfer size: 4096 bytes 00:06:45.615 Vector count 2 00:06:45.615 Module: software 00:06:45.615 Queue depth: 32 00:06:45.615 Allocate depth: 32 00:06:45.615 # threads/core: 1 00:06:45.615 Run time: 1 seconds 00:06:45.615 Verify: Yes 00:06:45.615 00:06:45.615 Running for 1 seconds... 00:06:45.615 00:06:45.615 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.615 ------------------------------------------------------------------------------------ 00:06:45.615 0,0 281216/s 2197 MiB/s 0 0 00:06:45.615 ==================================================================================== 00:06:45.615 Total 281216/s 1098 MiB/s 0 0' 00:06:45.615 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.615 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.615 14:52:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:45.615 14:52:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:45.615 14:52:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.615 14:52:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.615 14:52:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.615 14:52:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.615 14:52:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.615 14:52:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.615 14:52:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.615 14:52:04 -- accel/accel.sh@42 -- # jq -r . 00:06:45.874 [2024-06-11 14:52:04.473479] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
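With -C 2 in the run above, every submitted operation carries two 4 KiB vectors, which is why the per-thread bandwidth is roughly double the single-vector case; an illustrative check:

```bash
# Two 4 KiB vectors per operation at 281216 ops/s (illustrative check of the figure above).
awk 'BEGIN { printf "%.0f MiB/s\n", 281216 * 2 * 4096 / (1024 * 1024) }'    # -> 2197 MiB/s
```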
00:06:45.875 [2024-06-11 14:52:04.473556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091483 ] 00:06:45.875 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.875 [2024-06-11 14:52:04.563557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.875 [2024-06-11 14:52:04.646431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val= 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val= 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val=0x1 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val= 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val= 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val=crc32c 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val=0 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val= 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val=software 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val=32 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val=32 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- 
accel/accel.sh@21 -- # val=1 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val=Yes 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val= 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.875 14:52:04 -- accel/accel.sh@21 -- # val= 00:06:45.875 14:52:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.875 14:52:04 -- accel/accel.sh@20 -- # read -r var val 00:06:47.252 14:52:05 -- accel/accel.sh@21 -- # val= 00:06:47.252 14:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:47.252 14:52:05 -- accel/accel.sh@21 -- # val= 00:06:47.252 14:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:47.252 14:52:05 -- accel/accel.sh@21 -- # val= 00:06:47.252 14:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:47.252 14:52:05 -- accel/accel.sh@21 -- # val= 00:06:47.252 14:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:47.252 14:52:05 -- accel/accel.sh@21 -- # val= 00:06:47.252 14:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:47.252 14:52:05 -- accel/accel.sh@21 -- # val= 00:06:47.252 14:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # IFS=: 00:06:47.252 14:52:05 -- accel/accel.sh@20 -- # read -r var val 00:06:47.252 14:52:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.252 14:52:05 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:47.252 14:52:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.252 00:06:47.252 real 0m2.835s 00:06:47.252 user 0m2.555s 00:06:47.252 sys 0m0.285s 00:06:47.252 14:52:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.252 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:47.252 ************************************ 00:06:47.252 END TEST accel_crc32c_C2 00:06:47.252 ************************************ 00:06:47.252 14:52:05 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:47.252 14:52:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:47.252 14:52:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.252 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:47.252 ************************************ 00:06:47.252 START TEST accel_copy 
00:06:47.252 ************************************ 00:06:47.252 14:52:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:47.252 14:52:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.252 14:52:05 -- accel/accel.sh@17 -- # local accel_module 00:06:47.252 14:52:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:47.252 14:52:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:47.252 14:52:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.252 14:52:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.252 14:52:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.252 14:52:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.252 14:52:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.252 14:52:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.252 14:52:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.252 14:52:05 -- accel/accel.sh@42 -- # jq -r . 00:06:47.252 [2024-06-11 14:52:05.931398] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:47.252 [2024-06-11 14:52:05.931471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091770 ] 00:06:47.252 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.252 [2024-06-11 14:52:06.020644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.511 [2024-06-11 14:52:06.104767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.888 14:52:07 -- accel/accel.sh@18 -- # out=' 00:06:48.888 SPDK Configuration: 00:06:48.888 Core mask: 0x1 00:06:48.888 00:06:48.888 Accel Perf Configuration: 00:06:48.888 Workload Type: copy 00:06:48.888 Transfer size: 4096 bytes 00:06:48.888 Vector count 1 00:06:48.888 Module: software 00:06:48.888 Queue depth: 32 00:06:48.888 Allocate depth: 32 00:06:48.888 # threads/core: 1 00:06:48.888 Run time: 1 seconds 00:06:48.888 Verify: Yes 00:06:48.888 00:06:48.888 Running for 1 seconds... 00:06:48.888 00:06:48.888 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.888 ------------------------------------------------------------------------------------ 00:06:48.888 0,0 265280/s 1036 MiB/s 0 0 00:06:48.888 ==================================================================================== 00:06:48.888 Total 265280/s 1036 MiB/s 0 0' 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:48.888 14:52:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:48.888 14:52:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.888 14:52:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.888 14:52:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.888 14:52:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.888 14:52:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.888 14:52:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.888 14:52:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.888 14:52:07 -- accel/accel.sh@42 -- # jq -r . 00:06:48.888 [2024-06-11 14:52:07.342659] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:48.888 [2024-06-11 14:52:07.342722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092037 ] 00:06:48.888 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.888 [2024-06-11 14:52:07.430175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.888 [2024-06-11 14:52:07.512707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val= 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val= 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val=0x1 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val= 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val= 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val=copy 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val= 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val=software 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val=32 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val=32 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val=1 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val=Yes 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val= 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.888 14:52:07 -- accel/accel.sh@21 -- # val= 00:06:48.888 14:52:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # IFS=: 00:06:48.888 14:52:07 -- accel/accel.sh@20 -- # read -r var val 00:06:50.267 14:52:08 -- accel/accel.sh@21 -- # val= 00:06:50.267 14:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # IFS=: 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # read -r var val 00:06:50.267 14:52:08 -- accel/accel.sh@21 -- # val= 00:06:50.267 14:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # IFS=: 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # read -r var val 00:06:50.267 14:52:08 -- accel/accel.sh@21 -- # val= 00:06:50.267 14:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # IFS=: 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # read -r var val 00:06:50.267 14:52:08 -- accel/accel.sh@21 -- # val= 00:06:50.267 14:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # IFS=: 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # read -r var val 00:06:50.267 14:52:08 -- accel/accel.sh@21 -- # val= 00:06:50.267 14:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # IFS=: 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # read -r var val 00:06:50.267 14:52:08 -- accel/accel.sh@21 -- # val= 00:06:50.267 14:52:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # IFS=: 00:06:50.267 14:52:08 -- accel/accel.sh@20 -- # read -r var val 00:06:50.267 14:52:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.267 14:52:08 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:50.267 14:52:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.267 00:06:50.267 real 0m2.827s 00:06:50.267 user 0m2.549s 00:06:50.267 sys 0m0.283s 00:06:50.267 14:52:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.267 14:52:08 -- common/autotest_common.sh@10 -- # set +x 00:06:50.267 ************************************ 00:06:50.267 END TEST accel_copy 00:06:50.267 ************************************ 00:06:50.267 14:52:08 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.267 14:52:08 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:50.267 14:52:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.267 14:52:08 -- common/autotest_common.sh@10 -- # set +x 00:06:50.267 ************************************ 00:06:50.267 START TEST accel_fill 00:06:50.267 ************************************ 00:06:50.267 14:52:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.267 14:52:08 -- accel/accel.sh@16 -- # local accel_opc 
00:06:50.267 14:52:08 -- accel/accel.sh@17 -- # local accel_module 00:06:50.267 14:52:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.267 14:52:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.267 14:52:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.267 14:52:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.267 14:52:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.267 14:52:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.267 14:52:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.267 14:52:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.267 14:52:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.267 14:52:08 -- accel/accel.sh@42 -- # jq -r . 00:06:50.267 [2024-06-11 14:52:08.799296] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:50.267 [2024-06-11 14:52:08.799372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092324 ] 00:06:50.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.267 [2024-06-11 14:52:08.887781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.267 [2024-06-11 14:52:08.971536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.646 14:52:10 -- accel/accel.sh@18 -- # out=' 00:06:51.646 SPDK Configuration: 00:06:51.646 Core mask: 0x1 00:06:51.646 00:06:51.646 Accel Perf Configuration: 00:06:51.646 Workload Type: fill 00:06:51.646 Fill pattern: 0x80 00:06:51.646 Transfer size: 4096 bytes 00:06:51.646 Vector count 1 00:06:51.646 Module: software 00:06:51.646 Queue depth: 64 00:06:51.646 Allocate depth: 64 00:06:51.646 # threads/core: 1 00:06:51.646 Run time: 1 seconds 00:06:51.646 Verify: Yes 00:06:51.646 00:06:51.646 Running for 1 seconds... 00:06:51.646 00:06:51.646 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.646 ------------------------------------------------------------------------------------ 00:06:51.646 0,0 412096/s 1609 MiB/s 0 0 00:06:51.646 ==================================================================================== 00:06:51.646 Total 412096/s 1609 MiB/s 0 0' 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.646 14:52:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.646 14:52:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.646 14:52:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.646 14:52:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.646 14:52:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.646 14:52:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.646 14:52:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.646 14:52:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.646 14:52:10 -- accel/accel.sh@42 -- # jq -r . 00:06:51.646 [2024-06-11 14:52:10.215089] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:51.646 [2024-06-11 14:52:10.215162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092588 ] 00:06:51.646 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.646 [2024-06-11 14:52:10.304170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.646 [2024-06-11 14:52:10.390752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val= 00:06:51.646 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val= 00:06:51.646 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val=0x1 00:06:51.646 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val= 00:06:51.646 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val= 00:06:51.646 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val=fill 00:06:51.646 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.646 14:52:10 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val=0x80 00:06:51.646 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.646 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val= 00:06:51.646 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.646 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.646 14:52:10 -- accel/accel.sh@21 -- # val=software 00:06:51.647 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.647 14:52:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.647 14:52:10 -- accel/accel.sh@21 -- # val=64 00:06:51.647 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.647 14:52:10 -- accel/accel.sh@21 -- # val=64 00:06:51.647 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.647 14:52:10 -- 
accel/accel.sh@21 -- # val=1 00:06:51.647 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.647 14:52:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.647 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.647 14:52:10 -- accel/accel.sh@21 -- # val=Yes 00:06:51.647 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.647 14:52:10 -- accel/accel.sh@21 -- # val= 00:06:51.647 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.647 14:52:10 -- accel/accel.sh@21 -- # val= 00:06:51.647 14:52:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # IFS=: 00:06:51.647 14:52:10 -- accel/accel.sh@20 -- # read -r var val 00:06:53.024 14:52:11 -- accel/accel.sh@21 -- # val= 00:06:53.024 14:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:53.024 14:52:11 -- accel/accel.sh@21 -- # val= 00:06:53.024 14:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:53.024 14:52:11 -- accel/accel.sh@21 -- # val= 00:06:53.024 14:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:53.024 14:52:11 -- accel/accel.sh@21 -- # val= 00:06:53.024 14:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:53.024 14:52:11 -- accel/accel.sh@21 -- # val= 00:06:53.024 14:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:53.024 14:52:11 -- accel/accel.sh@21 -- # val= 00:06:53.024 14:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # IFS=: 00:06:53.024 14:52:11 -- accel/accel.sh@20 -- # read -r var val 00:06:53.024 14:52:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.024 14:52:11 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:53.024 14:52:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.024 00:06:53.024 real 0m2.838s 00:06:53.024 user 0m2.561s 00:06:53.024 sys 0m0.282s 00:06:53.024 14:52:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.024 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 ************************************ 00:06:53.024 END TEST accel_fill 00:06:53.024 ************************************ 00:06:53.024 14:52:11 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:53.024 14:52:11 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:53.024 14:52:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.024 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 ************************************ 00:06:53.024 START TEST 
accel_copy_crc32c 00:06:53.024 ************************************ 00:06:53.025 14:52:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:53.025 14:52:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.025 14:52:11 -- accel/accel.sh@17 -- # local accel_module 00:06:53.025 14:52:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:53.025 14:52:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:53.025 14:52:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.025 14:52:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.025 14:52:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.025 14:52:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.025 14:52:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.025 14:52:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.025 14:52:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.025 14:52:11 -- accel/accel.sh@42 -- # jq -r . 00:06:53.025 [2024-06-11 14:52:11.676413] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:53.025 [2024-06-11 14:52:11.676487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092872 ] 00:06:53.025 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.025 [2024-06-11 14:52:11.765124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.025 [2024-06-11 14:52:11.848785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.403 14:52:13 -- accel/accel.sh@18 -- # out=' 00:06:54.403 SPDK Configuration: 00:06:54.403 Core mask: 0x1 00:06:54.403 00:06:54.403 Accel Perf Configuration: 00:06:54.403 Workload Type: copy_crc32c 00:06:54.403 CRC-32C seed: 0 00:06:54.403 Vector size: 4096 bytes 00:06:54.403 Transfer size: 4096 bytes 00:06:54.403 Vector count 1 00:06:54.403 Module: software 00:06:54.403 Queue depth: 32 00:06:54.403 Allocate depth: 32 00:06:54.403 # threads/core: 1 00:06:54.403 Run time: 1 seconds 00:06:54.403 Verify: Yes 00:06:54.403 00:06:54.403 Running for 1 seconds... 00:06:54.403 00:06:54.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.403 ------------------------------------------------------------------------------------ 00:06:54.403 0,0 203296/s 794 MiB/s 0 0 00:06:54.403 ==================================================================================== 00:06:54.403 Total 203296/s 794 MiB/s 0 0' 00:06:54.403 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.403 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.403 14:52:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:54.403 14:52:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:54.403 14:52:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.403 14:52:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.403 14:52:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.403 14:52:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.403 14:52:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.403 14:52:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.403 14:52:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.403 14:52:13 -- accel/accel.sh@42 -- # jq -r . 
00:06:54.403 [2024-06-11 14:52:13.087431] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:54.403 [2024-06-11 14:52:13.087491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093142 ] 00:06:54.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.403 [2024-06-11 14:52:13.173114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.663 [2024-06-11 14:52:13.256323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.663 14:52:13 -- accel/accel.sh@21 -- # val= 00:06:54.663 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.663 14:52:13 -- accel/accel.sh@21 -- # val= 00:06:54.663 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.663 14:52:13 -- accel/accel.sh@21 -- # val=0x1 00:06:54.663 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.663 14:52:13 -- accel/accel.sh@21 -- # val= 00:06:54.663 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.663 14:52:13 -- accel/accel.sh@21 -- # val= 00:06:54.663 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.663 14:52:13 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:54.663 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.663 14:52:13 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:54.663 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val=0 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val= 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val=software 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val=32 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 
00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val=32 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val=1 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val=Yes 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val= 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.664 14:52:13 -- accel/accel.sh@21 -- # val= 00:06:54.664 14:52:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.664 14:52:13 -- accel/accel.sh@20 -- # read -r var val 00:06:56.041 14:52:14 -- accel/accel.sh@21 -- # val= 00:06:56.041 14:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:56.041 14:52:14 -- accel/accel.sh@21 -- # val= 00:06:56.041 14:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:56.041 14:52:14 -- accel/accel.sh@21 -- # val= 00:06:56.041 14:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:56.041 14:52:14 -- accel/accel.sh@21 -- # val= 00:06:56.041 14:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:56.041 14:52:14 -- accel/accel.sh@21 -- # val= 00:06:56.041 14:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:56.041 14:52:14 -- accel/accel.sh@21 -- # val= 00:06:56.041 14:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # IFS=: 00:06:56.041 14:52:14 -- accel/accel.sh@20 -- # read -r var val 00:06:56.041 14:52:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.041 14:52:14 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:56.041 14:52:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.041 00:06:56.041 real 0m2.827s 00:06:56.041 user 0m2.560s 00:06:56.041 sys 0m0.274s 00:06:56.041 14:52:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.041 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:06:56.041 ************************************ 00:06:56.041 END TEST accel_copy_crc32c 00:06:56.041 ************************************ 00:06:56.041 
14:52:14 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:56.041 14:52:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:56.041 14:52:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.041 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:06:56.041 ************************************ 00:06:56.041 START TEST accel_copy_crc32c_C2 00:06:56.041 ************************************ 00:06:56.041 14:52:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:56.041 14:52:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.041 14:52:14 -- accel/accel.sh@17 -- # local accel_module 00:06:56.041 14:52:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:56.041 14:52:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:56.041 14:52:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.041 14:52:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.041 14:52:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.041 14:52:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.041 14:52:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.041 14:52:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.041 14:52:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.041 14:52:14 -- accel/accel.sh@42 -- # jq -r . 00:06:56.041 [2024-06-11 14:52:14.540436] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:56.041 [2024-06-11 14:52:14.540500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093423 ] 00:06:56.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.041 [2024-06-11 14:52:14.631201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.041 [2024-06-11 14:52:14.715147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.419 14:52:15 -- accel/accel.sh@18 -- # out=' 00:06:57.419 SPDK Configuration: 00:06:57.419 Core mask: 0x1 00:06:57.419 00:06:57.419 Accel Perf Configuration: 00:06:57.419 Workload Type: copy_crc32c 00:06:57.419 CRC-32C seed: 0 00:06:57.419 Vector size: 4096 bytes 00:06:57.419 Transfer size: 8192 bytes 00:06:57.419 Vector count 2 00:06:57.419 Module: software 00:06:57.419 Queue depth: 32 00:06:57.419 Allocate depth: 32 00:06:57.419 # threads/core: 1 00:06:57.419 Run time: 1 seconds 00:06:57.419 Verify: Yes 00:06:57.419 00:06:57.419 Running for 1 seconds... 
00:06:57.419 00:06:57.419 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.419 ------------------------------------------------------------------------------------ 00:06:57.419 0,0 146400/s 1143 MiB/s 0 0 00:06:57.419 ==================================================================================== 00:06:57.419 Total 146400/s 571 MiB/s 0 0' 00:06:57.419 14:52:15 -- accel/accel.sh@20 -- # IFS=: 00:06:57.419 14:52:15 -- accel/accel.sh@20 -- # read -r var val 00:06:57.419 14:52:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:57.419 14:52:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:57.419 14:52:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.419 14:52:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.419 14:52:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.419 14:52:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.419 14:52:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.419 14:52:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.419 14:52:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.419 14:52:15 -- accel/accel.sh@42 -- # jq -r . 00:06:57.420 [2024-06-11 14:52:15.955661] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:57.420 [2024-06-11 14:52:15.955736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093695 ] 00:06:57.420 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.420 [2024-06-11 14:52:16.041772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.420 [2024-06-11 14:52:16.124805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val= 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val= 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val=0x1 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val= 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val= 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val=0 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 
00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val= 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val=software 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val=32 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val=32 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val=1 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val=Yes 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val= 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.420 14:52:16 -- accel/accel.sh@21 -- # val= 00:06:57.420 14:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.420 14:52:16 -- accel/accel.sh@20 -- # read -r var val 00:06:58.799 14:52:17 -- accel/accel.sh@21 -- # val= 00:06:58.800 14:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:58.800 14:52:17 -- accel/accel.sh@21 -- # val= 00:06:58.800 14:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:58.800 14:52:17 -- accel/accel.sh@21 -- # val= 00:06:58.800 14:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:58.800 14:52:17 -- accel/accel.sh@21 -- # val= 00:06:58.800 14:52:17 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:58.800 14:52:17 -- accel/accel.sh@21 -- # val= 00:06:58.800 14:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:58.800 14:52:17 -- accel/accel.sh@21 -- # val= 00:06:58.800 14:52:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # IFS=: 00:06:58.800 14:52:17 -- accel/accel.sh@20 -- # read -r var val 00:06:58.800 14:52:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.800 14:52:17 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:58.800 14:52:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.800 00:06:58.800 real 0m2.829s 00:06:58.800 user 0m2.557s 00:06:58.800 sys 0m0.278s 00:06:58.800 14:52:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.800 14:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.800 ************************************ 00:06:58.800 END TEST accel_copy_crc32c_C2 00:06:58.800 ************************************ 00:06:58.800 14:52:17 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:58.800 14:52:17 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:58.800 14:52:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.800 14:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.800 ************************************ 00:06:58.800 START TEST accel_dualcast 00:06:58.800 ************************************ 00:06:58.800 14:52:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:58.800 14:52:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.800 14:52:17 -- accel/accel.sh@17 -- # local accel_module 00:06:58.800 14:52:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:58.800 14:52:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:58.800 14:52:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.800 14:52:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.800 14:52:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.800 14:52:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.800 14:52:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.800 14:52:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.800 14:52:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.800 14:52:17 -- accel/accel.sh@42 -- # jq -r . 00:06:58.800 [2024-06-11 14:52:17.408283] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:58.800 [2024-06-11 14:52:17.408342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093977 ] 00:06:58.800 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.800 [2024-06-11 14:52:17.494414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.800 [2024-06-11 14:52:17.581575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.178 14:52:18 -- accel/accel.sh@18 -- # out=' 00:07:00.178 SPDK Configuration: 00:07:00.178 Core mask: 0x1 00:07:00.178 00:07:00.178 Accel Perf Configuration: 00:07:00.178 Workload Type: dualcast 00:07:00.178 Transfer size: 4096 bytes 00:07:00.178 Vector count 1 00:07:00.178 Module: software 00:07:00.178 Queue depth: 32 00:07:00.178 Allocate depth: 32 00:07:00.178 # threads/core: 1 00:07:00.178 Run time: 1 seconds 00:07:00.178 Verify: Yes 00:07:00.178 00:07:00.178 Running for 1 seconds... 00:07:00.178 00:07:00.178 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.178 ------------------------------------------------------------------------------------ 00:07:00.178 0,0 312480/s 1220 MiB/s 0 0 00:07:00.178 ==================================================================================== 00:07:00.178 Total 312480/s 1220 MiB/s 0 0' 00:07:00.178 14:52:18 -- accel/accel.sh@20 -- # IFS=: 00:07:00.178 14:52:18 -- accel/accel.sh@20 -- # read -r var val 00:07:00.178 14:52:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:00.178 14:52:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:00.178 14:52:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.178 14:52:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.178 14:52:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.178 14:52:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.178 14:52:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.178 14:52:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.178 14:52:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.178 14:52:18 -- accel/accel.sh@42 -- # jq -r . 00:07:00.178 [2024-06-11 14:52:18.819324] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:00.178 [2024-06-11 14:52:18.819384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094242 ] 00:07:00.178 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.178 [2024-06-11 14:52:18.904477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.178 [2024-06-11 14:52:18.987858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val= 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val= 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val=0x1 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val= 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val= 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val=dualcast 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val= 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val=software 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val=32 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val=32 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val=1 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val=Yes 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val= 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.438 14:52:19 -- accel/accel.sh@21 -- # val= 00:07:00.438 14:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.438 14:52:19 -- accel/accel.sh@20 -- # read -r var val 00:07:01.374 14:52:20 -- accel/accel.sh@21 -- # val= 00:07:01.374 14:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.374 14:52:20 -- accel/accel.sh@21 -- # val= 00:07:01.374 14:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.374 14:52:20 -- accel/accel.sh@21 -- # val= 00:07:01.374 14:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.374 14:52:20 -- accel/accel.sh@21 -- # val= 00:07:01.374 14:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.374 14:52:20 -- accel/accel.sh@21 -- # val= 00:07:01.374 14:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.374 14:52:20 -- accel/accel.sh@21 -- # val= 00:07:01.374 14:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # IFS=: 00:07:01.374 14:52:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.374 14:52:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.374 14:52:20 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:01.374 14:52:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.374 00:07:01.374 real 0m2.823s 00:07:01.374 user 0m2.558s 00:07:01.374 sys 0m0.270s 00:07:01.374 14:52:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.374 14:52:20 -- common/autotest_common.sh@10 -- # set +x 00:07:01.374 ************************************ 00:07:01.374 END TEST accel_dualcast 00:07:01.374 ************************************ 00:07:01.633 14:52:20 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:01.633 14:52:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:01.633 14:52:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.633 14:52:20 -- common/autotest_common.sh@10 -- # set +x 00:07:01.633 ************************************ 00:07:01.633 START TEST accel_compare 00:07:01.633 ************************************ 00:07:01.633 14:52:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:01.633 14:52:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.633 14:52:20 
-- accel/accel.sh@17 -- # local accel_module 00:07:01.633 14:52:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:01.633 14:52:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:01.633 14:52:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.633 14:52:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.633 14:52:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.633 14:52:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.633 14:52:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.633 14:52:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.633 14:52:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.633 14:52:20 -- accel/accel.sh@42 -- # jq -r . 00:07:01.633 [2024-06-11 14:52:20.269535] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:01.633 [2024-06-11 14:52:20.269593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094529 ] 00:07:01.633 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.633 [2024-06-11 14:52:20.355339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.633 [2024-06-11 14:52:20.439789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.011 14:52:21 -- accel/accel.sh@18 -- # out=' 00:07:03.011 SPDK Configuration: 00:07:03.011 Core mask: 0x1 00:07:03.011 00:07:03.011 Accel Perf Configuration: 00:07:03.011 Workload Type: compare 00:07:03.011 Transfer size: 4096 bytes 00:07:03.011 Vector count 1 00:07:03.011 Module: software 00:07:03.011 Queue depth: 32 00:07:03.011 Allocate depth: 32 00:07:03.011 # threads/core: 1 00:07:03.011 Run time: 1 seconds 00:07:03.011 Verify: Yes 00:07:03.011 00:07:03.011 Running for 1 seconds... 00:07:03.011 00:07:03.011 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.011 ------------------------------------------------------------------------------------ 00:07:03.011 0,0 381536/s 1490 MiB/s 0 0 00:07:03.011 ==================================================================================== 00:07:03.011 Total 381536/s 1490 MiB/s 0 0' 00:07:03.011 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.011 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.011 14:52:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:03.011 14:52:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:03.011 14:52:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.011 14:52:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.011 14:52:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.011 14:52:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.011 14:52:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.011 14:52:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.011 14:52:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.011 14:52:21 -- accel/accel.sh@42 -- # jq -r . 00:07:03.011 [2024-06-11 14:52:21.678863] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:03.011 [2024-06-11 14:52:21.678925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094796 ] 00:07:03.011 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.011 [2024-06-11 14:52:21.765848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.011 [2024-06-11 14:52:21.849019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val= 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val= 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val=0x1 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val= 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val= 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val=compare 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val= 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val=software 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val=32 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val=32 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val=1 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val=Yes 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val= 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.269 14:52:21 -- accel/accel.sh@21 -- # val= 00:07:03.269 14:52:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.269 14:52:21 -- accel/accel.sh@20 -- # read -r var val 00:07:04.645 14:52:23 -- accel/accel.sh@21 -- # val= 00:07:04.645 14:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.645 14:52:23 -- accel/accel.sh@21 -- # val= 00:07:04.645 14:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.645 14:52:23 -- accel/accel.sh@21 -- # val= 00:07:04.645 14:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.645 14:52:23 -- accel/accel.sh@21 -- # val= 00:07:04.645 14:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.645 14:52:23 -- accel/accel.sh@21 -- # val= 00:07:04.645 14:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.645 14:52:23 -- accel/accel.sh@21 -- # val= 00:07:04.645 14:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.645 14:52:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.645 14:52:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.645 14:52:23 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:04.645 14:52:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.645 00:07:04.645 real 0m2.823s 00:07:04.645 user 0m2.549s 00:07:04.645 sys 0m0.278s 00:07:04.645 14:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.645 14:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:04.645 ************************************ 00:07:04.645 END TEST accel_compare 00:07:04.645 ************************************ 00:07:04.645 14:52:23 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:04.645 14:52:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:04.645 14:52:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.645 14:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:04.645 ************************************ 00:07:04.645 START TEST accel_xor 00:07:04.645 ************************************ 00:07:04.645 14:52:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:04.645 14:52:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.645 14:52:23 -- accel/accel.sh@17 
-- # local accel_module 00:07:04.645 14:52:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:04.645 14:52:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:04.645 14:52:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.645 14:52:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.645 14:52:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.645 14:52:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.645 14:52:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.645 14:52:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.645 14:52:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.645 14:52:23 -- accel/accel.sh@42 -- # jq -r . 00:07:04.645 [2024-06-11 14:52:23.132839] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:04.645 [2024-06-11 14:52:23.132902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095085 ] 00:07:04.645 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.645 [2024-06-11 14:52:23.222664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.645 [2024-06-11 14:52:23.305373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.021 14:52:24 -- accel/accel.sh@18 -- # out=' 00:07:06.021 SPDK Configuration: 00:07:06.021 Core mask: 0x1 00:07:06.021 00:07:06.021 Accel Perf Configuration: 00:07:06.021 Workload Type: xor 00:07:06.021 Source buffers: 2 00:07:06.021 Transfer size: 4096 bytes 00:07:06.021 Vector count 1 00:07:06.021 Module: software 00:07:06.021 Queue depth: 32 00:07:06.021 Allocate depth: 32 00:07:06.021 # threads/core: 1 00:07:06.021 Run time: 1 seconds 00:07:06.021 Verify: Yes 00:07:06.021 00:07:06.021 Running for 1 seconds... 00:07:06.021 00:07:06.021 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.021 ------------------------------------------------------------------------------------ 00:07:06.021 0,0 313536/s 1224 MiB/s 0 0 00:07:06.022 ==================================================================================== 00:07:06.022 Total 313536/s 1224 MiB/s 0 0' 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:06.022 14:52:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:06.022 14:52:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.022 14:52:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.022 14:52:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.022 14:52:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.022 14:52:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.022 14:52:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.022 14:52:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.022 14:52:24 -- accel/accel.sh@42 -- # jq -r . 00:07:06.022 [2024-06-11 14:52:24.542856] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:06.022 [2024-06-11 14:52:24.542916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095351 ] 00:07:06.022 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.022 [2024-06-11 14:52:24.630245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.022 [2024-06-11 14:52:24.711810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val= 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val= 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val=0x1 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val= 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val= 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val=xor 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val=2 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val= 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val=software 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val=32 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val=32 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- 
accel/accel.sh@21 -- # val=1 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val=Yes 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val= 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.022 14:52:24 -- accel/accel.sh@21 -- # val= 00:07:06.022 14:52:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.022 14:52:24 -- accel/accel.sh@20 -- # read -r var val 00:07:07.400 14:52:25 -- accel/accel.sh@21 -- # val= 00:07:07.400 14:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:07.400 14:52:25 -- accel/accel.sh@21 -- # val= 00:07:07.400 14:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:07.400 14:52:25 -- accel/accel.sh@21 -- # val= 00:07:07.400 14:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:07.400 14:52:25 -- accel/accel.sh@21 -- # val= 00:07:07.400 14:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:07.400 14:52:25 -- accel/accel.sh@21 -- # val= 00:07:07.400 14:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:07.400 14:52:25 -- accel/accel.sh@21 -- # val= 00:07:07.400 14:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # IFS=: 00:07:07.400 14:52:25 -- accel/accel.sh@20 -- # read -r var val 00:07:07.400 14:52:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.400 14:52:25 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:07.400 14:52:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.400 00:07:07.400 real 0m2.823s 00:07:07.400 user 0m2.553s 00:07:07.400 sys 0m0.276s 00:07:07.400 14:52:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.400 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:07:07.400 ************************************ 00:07:07.400 END TEST accel_xor 00:07:07.400 ************************************ 00:07:07.400 14:52:25 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:07.400 14:52:25 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:07.400 14:52:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.400 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:07:07.400 ************************************ 00:07:07.400 START TEST accel_xor 
00:07:07.400 ************************************ 00:07:07.400 14:52:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:07.400 14:52:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.400 14:52:25 -- accel/accel.sh@17 -- # local accel_module 00:07:07.400 14:52:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:07.400 14:52:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:07.400 14:52:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.400 14:52:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.400 14:52:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.400 14:52:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.400 14:52:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.400 14:52:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.400 14:52:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.400 14:52:25 -- accel/accel.sh@42 -- # jq -r . 00:07:07.401 [2024-06-11 14:52:25.995734] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:07.401 [2024-06-11 14:52:25.995802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095634 ] 00:07:07.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.401 [2024-06-11 14:52:26.084196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.401 [2024-06-11 14:52:26.167922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.778 14:52:27 -- accel/accel.sh@18 -- # out=' 00:07:08.778 SPDK Configuration: 00:07:08.778 Core mask: 0x1 00:07:08.778 00:07:08.778 Accel Perf Configuration: 00:07:08.778 Workload Type: xor 00:07:08.778 Source buffers: 3 00:07:08.778 Transfer size: 4096 bytes 00:07:08.778 Vector count 1 00:07:08.778 Module: software 00:07:08.778 Queue depth: 32 00:07:08.778 Allocate depth: 32 00:07:08.778 # threads/core: 1 00:07:08.778 Run time: 1 seconds 00:07:08.778 Verify: Yes 00:07:08.778 00:07:08.778 Running for 1 seconds... 00:07:08.778 00:07:08.778 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.778 ------------------------------------------------------------------------------------ 00:07:08.778 0,0 294464/s 1150 MiB/s 0 0 00:07:08.778 ==================================================================================== 00:07:08.778 Total 294464/s 1150 MiB/s 0 0' 00:07:08.778 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:08.778 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:08.778 14:52:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:08.778 14:52:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:08.778 14:52:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.778 14:52:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.778 14:52:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.778 14:52:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.778 14:52:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.778 14:52:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.778 14:52:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.778 14:52:27 -- accel/accel.sh@42 -- # jq -r . 00:07:08.778 [2024-06-11 14:52:27.406888] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
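
The run above drives the accel framework's xor workload with three 4096-byte source buffers; the software module folds them into a single destination buffer with bitwise XOR, and the -y flag makes the example re-verify the result ("Verify: Yes" in the configuration block). The snippet below is only a rough stand-alone illustration of that reduction in plain shell arithmetic, not SPDK code; the helper name and the sample hex words are invented.

# Rough illustration (not SPDK code): XOR-combine N hex words the way the xor
# workload folds N source buffers into one destination buffer.
xor_words() {
  local acc=$(( 16#$1 )) w
  shift
  for w in "$@"; do
    acc=$(( acc ^ 16#$w ))
  done
  printf '%08x\n' "$acc"
}
xor_words deadbeef 0000ffff 12345678   # prints cc991768, the XOR of all three words
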
00:07:08.778 [2024-06-11 14:52:27.406966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095906 ] 00:07:08.778 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.778 [2024-06-11 14:52:27.493360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.778 [2024-06-11 14:52:27.574930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val= 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val= 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val=0x1 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val= 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val= 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val=xor 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val=3 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val= 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val=software 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val=32 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val=32 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- 
accel/accel.sh@21 -- # val=1 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val=Yes 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val= 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.036 14:52:27 -- accel/accel.sh@21 -- # val= 00:07:09.036 14:52:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.036 14:52:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.971 14:52:28 -- accel/accel.sh@21 -- # val= 00:07:09.971 14:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.971 14:52:28 -- accel/accel.sh@21 -- # val= 00:07:09.971 14:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.971 14:52:28 -- accel/accel.sh@21 -- # val= 00:07:09.971 14:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.971 14:52:28 -- accel/accel.sh@21 -- # val= 00:07:09.971 14:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.971 14:52:28 -- accel/accel.sh@21 -- # val= 00:07:09.971 14:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.971 14:52:28 -- accel/accel.sh@21 -- # val= 00:07:09.971 14:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.971 14:52:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.971 14:52:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.971 14:52:28 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:09.971 14:52:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.971 00:07:09.971 real 0m2.824s 00:07:09.971 user 0m2.549s 00:07:09.971 sys 0m0.279s 00:07:09.971 14:52:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.971 14:52:28 -- common/autotest_common.sh@10 -- # set +x 00:07:09.971 ************************************ 00:07:09.972 END TEST accel_xor 00:07:09.972 ************************************ 00:07:10.230 14:52:28 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:10.230 14:52:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:10.230 14:52:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.230 14:52:28 -- common/autotest_common.sh@10 -- # set +x 00:07:10.230 ************************************ 00:07:10.230 START TEST 
accel_dif_verify 00:07:10.230 ************************************ 00:07:10.230 14:52:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:10.231 14:52:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.231 14:52:28 -- accel/accel.sh@17 -- # local accel_module 00:07:10.231 14:52:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:10.231 14:52:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:10.231 14:52:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.231 14:52:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.231 14:52:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.231 14:52:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.231 14:52:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.231 14:52:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.231 14:52:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.231 14:52:28 -- accel/accel.sh@42 -- # jq -r . 00:07:10.231 [2024-06-11 14:52:28.858183] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:10.231 [2024-06-11 14:52:28.858239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096185 ] 00:07:10.231 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.231 [2024-06-11 14:52:28.944599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.231 [2024-06-11 14:52:29.028859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.607 14:52:30 -- accel/accel.sh@18 -- # out=' 00:07:11.607 SPDK Configuration: 00:07:11.607 Core mask: 0x1 00:07:11.607 00:07:11.607 Accel Perf Configuration: 00:07:11.607 Workload Type: dif_verify 00:07:11.607 Vector size: 4096 bytes 00:07:11.607 Transfer size: 4096 bytes 00:07:11.607 Block size: 512 bytes 00:07:11.607 Metadata size: 8 bytes 00:07:11.607 Vector count 1 00:07:11.607 Module: software 00:07:11.607 Queue depth: 32 00:07:11.607 Allocate depth: 32 00:07:11.607 # threads/core: 1 00:07:11.607 Run time: 1 seconds 00:07:11.607 Verify: No 00:07:11.607 00:07:11.607 Running for 1 seconds... 00:07:11.607 00:07:11.607 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.607 ------------------------------------------------------------------------------------ 00:07:11.607 0,0 81568/s 323 MiB/s 0 0 00:07:11.607 ==================================================================================== 00:07:11.607 Total 81568/s 318 MiB/s 0 0' 00:07:11.607 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.607 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.607 14:52:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:11.607 14:52:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:11.607 14:52:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.608 14:52:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.608 14:52:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.608 14:52:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.608 14:52:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.608 14:52:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.608 14:52:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.608 14:52:30 -- accel/accel.sh@42 -- # jq -r . 
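
The dif_verify configuration above pairs 512-byte data blocks with 8 bytes of protection information per block, so each 4096-byte transfer carries eight blocks and 64 bytes of DIF metadata (a standard T10 DIF tuple is a 2-byte guard CRC, a 2-byte application tag and a 4-byte reference tag). A quick back-of-the-envelope check of those sizes, with throwaway variable names:

# Layout math for the dif_verify settings shown above; variable names are ad hoc.
transfer=4096   # "Transfer size" from the configuration block
block=512       # "Block size"
md=8            # "Metadata size", one DIF tuple per data block
blocks=$(( transfer / block ))
echo "$blocks blocks per transfer, $(( blocks * md )) bytes of protection info"
# -> 8 blocks per transfer, 64 bytes of protection info
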
00:07:11.608 [2024-06-11 14:52:30.267754] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:11.608 [2024-06-11 14:52:30.267813] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096460 ] 00:07:11.608 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.608 [2024-06-11 14:52:30.353546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.608 [2024-06-11 14:52:30.435283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val= 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val= 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val=0x1 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val= 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val= 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val=dif_verify 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val= 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val=software 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val=32 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val=32 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val=1 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val=No 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val= 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.867 14:52:30 -- accel/accel.sh@21 -- # val= 00:07:11.867 14:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.867 14:52:30 -- accel/accel.sh@20 -- # read -r var val 00:07:13.244 14:52:31 -- accel/accel.sh@21 -- # val= 00:07:13.244 14:52:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.244 14:52:31 -- accel/accel.sh@21 -- # val= 00:07:13.244 14:52:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.244 14:52:31 -- accel/accel.sh@21 -- # val= 00:07:13.244 14:52:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.244 14:52:31 -- accel/accel.sh@21 -- # val= 00:07:13.244 14:52:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.244 14:52:31 -- accel/accel.sh@21 -- # val= 00:07:13.244 14:52:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.244 14:52:31 -- accel/accel.sh@21 -- # val= 00:07:13.244 14:52:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.244 14:52:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.244 14:52:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.244 14:52:31 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:13.244 14:52:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.244 00:07:13.244 real 0m2.826s 00:07:13.244 user 0m2.554s 00:07:13.244 sys 0m0.277s 00:07:13.244 14:52:31 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.244 14:52:31 -- common/autotest_common.sh@10 -- # set +x 00:07:13.244 ************************************ 00:07:13.244 END TEST accel_dif_verify 00:07:13.244 ************************************ 00:07:13.244 14:52:31 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:13.244 14:52:31 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:13.244 14:52:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.244 14:52:31 -- common/autotest_common.sh@10 -- # set +x 00:07:13.244 ************************************ 00:07:13.244 START TEST accel_dif_generate 00:07:13.244 ************************************ 00:07:13.244 14:52:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:13.244 14:52:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.244 14:52:31 -- accel/accel.sh@17 -- # local accel_module 00:07:13.244 14:52:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:13.244 14:52:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:13.244 14:52:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.244 14:52:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.244 14:52:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.244 14:52:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.244 14:52:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.244 14:52:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.244 14:52:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.244 14:52:31 -- accel/accel.sh@42 -- # jq -r . 00:07:13.244 [2024-06-11 14:52:31.722576] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:13.244 [2024-06-11 14:52:31.722634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096739 ] 00:07:13.244 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.244 [2024-06-11 14:52:31.807745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.244 [2024-06-11 14:52:31.890771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.618 14:52:33 -- accel/accel.sh@18 -- # out=' 00:07:14.618 SPDK Configuration: 00:07:14.618 Core mask: 0x1 00:07:14.618 00:07:14.618 Accel Perf Configuration: 00:07:14.618 Workload Type: dif_generate 00:07:14.618 Vector size: 4096 bytes 00:07:14.618 Transfer size: 4096 bytes 00:07:14.618 Block size: 512 bytes 00:07:14.618 Metadata size: 8 bytes 00:07:14.618 Vector count 1 00:07:14.618 Module: software 00:07:14.618 Queue depth: 32 00:07:14.618 Allocate depth: 32 00:07:14.618 # threads/core: 1 00:07:14.618 Run time: 1 seconds 00:07:14.618 Verify: No 00:07:14.618 00:07:14.618 Running for 1 seconds... 
00:07:14.618 00:07:14.618 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.618 ------------------------------------------------------------------------------------ 00:07:14.618 0,0 98368/s 390 MiB/s 0 0 00:07:14.618 ==================================================================================== 00:07:14.618 Total 98368/s 384 MiB/s 0 0' 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:14.618 14:52:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:14.618 14:52:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.618 14:52:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.618 14:52:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.618 14:52:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.618 14:52:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.618 14:52:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.618 14:52:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.618 14:52:33 -- accel/accel.sh@42 -- # jq -r . 00:07:14.618 [2024-06-11 14:52:33.129525] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:14.618 [2024-06-11 14:52:33.129600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097005 ] 00:07:14.618 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.618 [2024-06-11 14:52:33.217172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.618 [2024-06-11 14:52:33.299280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val= 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val= 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val=0x1 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val= 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val= 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val=dif_generate 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 
00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val= 00:07:14.618 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.618 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.618 14:52:33 -- accel/accel.sh@21 -- # val=software 00:07:14.619 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.619 14:52:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.619 14:52:33 -- accel/accel.sh@21 -- # val=32 00:07:14.619 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.619 14:52:33 -- accel/accel.sh@21 -- # val=32 00:07:14.619 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.619 14:52:33 -- accel/accel.sh@21 -- # val=1 00:07:14.619 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.619 14:52:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.619 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.619 14:52:33 -- accel/accel.sh@21 -- # val=No 00:07:14.619 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.619 14:52:33 -- accel/accel.sh@21 -- # val= 00:07:14.619 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.619 14:52:33 -- accel/accel.sh@21 -- # val= 00:07:14.619 14:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.619 14:52:33 -- accel/accel.sh@20 -- # read -r var val 00:07:15.994 14:52:34 -- accel/accel.sh@21 -- # val= 00:07:15.994 14:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.994 14:52:34 -- accel/accel.sh@21 -- # val= 00:07:15.994 14:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.994 14:52:34 -- accel/accel.sh@21 -- # val= 00:07:15.994 14:52:34 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.994 14:52:34 -- accel/accel.sh@21 -- # val= 00:07:15.994 14:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.994 14:52:34 -- accel/accel.sh@21 -- # val= 00:07:15.994 14:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.994 14:52:34 -- accel/accel.sh@21 -- # val= 00:07:15.994 14:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.994 14:52:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.994 14:52:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.994 14:52:34 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:15.994 14:52:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.994 00:07:15.994 real 0m2.819s 00:07:15.994 user 0m2.546s 00:07:15.994 sys 0m0.280s 00:07:15.994 14:52:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.994 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.995 ************************************ 00:07:15.995 END TEST accel_dif_generate 00:07:15.995 ************************************ 00:07:15.995 14:52:34 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:15.995 14:52:34 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:15.995 14:52:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.995 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.995 ************************************ 00:07:15.995 START TEST accel_dif_generate_copy 00:07:15.995 ************************************ 00:07:15.995 14:52:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:15.995 14:52:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.995 14:52:34 -- accel/accel.sh@17 -- # local accel_module 00:07:15.995 14:52:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:15.995 14:52:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:15.995 14:52:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.995 14:52:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.995 14:52:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.995 14:52:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.995 14:52:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.995 14:52:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.995 14:52:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.995 14:52:34 -- accel/accel.sh@42 -- # jq -r . 00:07:15.995 [2024-06-11 14:52:34.581116] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:15.995 [2024-06-11 14:52:34.581175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097293 ] 00:07:15.995 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.995 [2024-06-11 14:52:34.666832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.995 [2024-06-11 14:52:34.749703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.428 14:52:35 -- accel/accel.sh@18 -- # out=' 00:07:17.428 SPDK Configuration: 00:07:17.428 Core mask: 0x1 00:07:17.428 00:07:17.428 Accel Perf Configuration: 00:07:17.428 Workload Type: dif_generate_copy 00:07:17.428 Vector size: 4096 bytes 00:07:17.428 Transfer size: 4096 bytes 00:07:17.428 Vector count 1 00:07:17.428 Module: software 00:07:17.428 Queue depth: 32 00:07:17.428 Allocate depth: 32 00:07:17.428 # threads/core: 1 00:07:17.428 Run time: 1 seconds 00:07:17.428 Verify: No 00:07:17.428 00:07:17.428 Running for 1 seconds... 00:07:17.428 00:07:17.428 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.428 ------------------------------------------------------------------------------------ 00:07:17.428 0,0 76096/s 301 MiB/s 0 0 00:07:17.428 ==================================================================================== 00:07:17.428 Total 76096/s 297 MiB/s 0 0' 00:07:17.428 14:52:35 -- accel/accel.sh@20 -- # IFS=: 00:07:17.428 14:52:35 -- accel/accel.sh@20 -- # read -r var val 00:07:17.428 14:52:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:17.428 14:52:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:17.428 14:52:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.428 14:52:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.428 14:52:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.428 14:52:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.428 14:52:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.428 14:52:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.428 14:52:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.428 14:52:35 -- accel/accel.sh@42 -- # jq -r . 00:07:17.428 [2024-06-11 14:52:35.988889] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
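
The Transfers column and the 4096-byte transfer size are enough to sanity-check the Total row of the dif_generate_copy table above: 76096 completions per second at 4 KiB each is roughly 297 MiB/s, which is what the table reports. Plain shell arithmetic, nothing SPDK-specific:

# Sanity check of the Total row above: completions/s times transfer size, in MiB/s.
transfers=76096      # from the dif_generate_copy results table
xfer_bytes=4096      # "Transfer size" from the configuration block
echo $(( transfers * xfer_bytes / 1024 / 1024 ))   # -> 297
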
00:07:17.428 [2024-06-11 14:52:35.988966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097559 ] 00:07:17.428 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.428 [2024-06-11 14:52:36.076221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.428 [2024-06-11 14:52:36.157905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.428 14:52:36 -- accel/accel.sh@21 -- # val= 00:07:17.428 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.428 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.428 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.428 14:52:36 -- accel/accel.sh@21 -- # val= 00:07:17.428 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.428 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val=0x1 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val= 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val= 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val= 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val=software 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val=32 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val=32 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r 
var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val=1 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val=No 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val= 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.429 14:52:36 -- accel/accel.sh@21 -- # val= 00:07:17.429 14:52:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.429 14:52:36 -- accel/accel.sh@20 -- # read -r var val 00:07:18.817 14:52:37 -- accel/accel.sh@21 -- # val= 00:07:18.817 14:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.817 14:52:37 -- accel/accel.sh@21 -- # val= 00:07:18.817 14:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.817 14:52:37 -- accel/accel.sh@21 -- # val= 00:07:18.817 14:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.817 14:52:37 -- accel/accel.sh@21 -- # val= 00:07:18.817 14:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.817 14:52:37 -- accel/accel.sh@21 -- # val= 00:07:18.817 14:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.817 14:52:37 -- accel/accel.sh@21 -- # val= 00:07:18.817 14:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.817 14:52:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.817 14:52:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.817 14:52:37 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:18.817 14:52:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.817 00:07:18.817 real 0m2.821s 00:07:18.817 user 0m2.547s 00:07:18.818 sys 0m0.278s 00:07:18.818 14:52:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.818 14:52:37 -- common/autotest_common.sh@10 -- # set +x 00:07:18.818 ************************************ 00:07:18.818 END TEST accel_dif_generate_copy 00:07:18.818 ************************************ 00:07:18.818 14:52:37 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:18.818 14:52:37 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.818 14:52:37 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:18.818 14:52:37 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.818 14:52:37 -- common/autotest_common.sh@10 -- # set +x 00:07:18.818 ************************************ 00:07:18.818 START TEST accel_comp 00:07:18.818 ************************************ 00:07:18.818 14:52:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.818 14:52:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.818 14:52:37 -- accel/accel.sh@17 -- # local accel_module 00:07:18.818 14:52:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.818 14:52:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.818 14:52:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.818 14:52:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.818 14:52:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.818 14:52:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.818 14:52:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.818 14:52:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.818 14:52:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.818 14:52:37 -- accel/accel.sh@42 -- # jq -r . 00:07:18.818 [2024-06-11 14:52:37.441649] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:18.818 [2024-06-11 14:52:37.441709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097848 ] 00:07:18.818 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.818 [2024-06-11 14:52:37.527535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.818 [2024-06-11 14:52:37.610849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.196 14:52:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:20.196 00:07:20.196 SPDK Configuration: 00:07:20.196 Core mask: 0x1 00:07:20.196 00:07:20.196 Accel Perf Configuration: 00:07:20.196 Workload Type: compress 00:07:20.196 Transfer size: 4096 bytes 00:07:20.196 Vector count 1 00:07:20.196 Module: software 00:07:20.196 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.196 Queue depth: 32 00:07:20.196 Allocate depth: 32 00:07:20.196 # threads/core: 1 00:07:20.196 Run time: 1 seconds 00:07:20.196 Verify: No 00:07:20.196 00:07:20.196 Running for 1 seconds... 
00:07:20.196 00:07:20.196 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.196 ------------------------------------------------------------------------------------ 00:07:20.196 0,0 40160/s 167 MiB/s 0 0 00:07:20.196 ==================================================================================== 00:07:20.196 Total 40160/s 156 MiB/s 0 0' 00:07:20.196 14:52:38 -- accel/accel.sh@20 -- # IFS=: 00:07:20.196 14:52:38 -- accel/accel.sh@20 -- # read -r var val 00:07:20.196 14:52:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.196 14:52:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.196 14:52:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.196 14:52:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.196 14:52:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.196 14:52:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.196 14:52:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.196 14:52:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.196 14:52:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.196 14:52:38 -- accel/accel.sh@42 -- # jq -r . 00:07:20.196 [2024-06-11 14:52:38.851893] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:20.196 [2024-06-11 14:52:38.851951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098117 ] 00:07:20.196 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.196 [2024-06-11 14:52:38.938278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.196 [2024-06-11 14:52:39.020295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val= 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val= 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val= 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val=0x1 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val= 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val= 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val=compress 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 
14:52:39 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val= 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val=software 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val=32 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val=32 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val=1 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val=No 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val= 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.456 14:52:39 -- accel/accel.sh@21 -- # val= 00:07:20.456 14:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.456 14:52:39 -- accel/accel.sh@20 -- # read -r var val 00:07:21.393 14:52:40 -- accel/accel.sh@21 -- # val= 00:07:21.652 14:52:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.652 14:52:40 -- accel/accel.sh@21 -- # val= 00:07:21.652 14:52:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.652 14:52:40 -- accel/accel.sh@21 -- # val= 00:07:21.652 14:52:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # 
IFS=: 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.652 14:52:40 -- accel/accel.sh@21 -- # val= 00:07:21.652 14:52:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.652 14:52:40 -- accel/accel.sh@21 -- # val= 00:07:21.652 14:52:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.652 14:52:40 -- accel/accel.sh@21 -- # val= 00:07:21.652 14:52:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.652 14:52:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.652 14:52:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.652 14:52:40 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:21.652 14:52:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.652 00:07:21.652 real 0m2.826s 00:07:21.652 user 0m2.556s 00:07:21.652 sys 0m0.276s 00:07:21.652 14:52:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.652 14:52:40 -- common/autotest_common.sh@10 -- # set +x 00:07:21.652 ************************************ 00:07:21.652 END TEST accel_comp 00:07:21.652 ************************************ 00:07:21.652 14:52:40 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.652 14:52:40 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:21.652 14:52:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.652 14:52:40 -- common/autotest_common.sh@10 -- # set +x 00:07:21.652 ************************************ 00:07:21.652 START TEST accel_decomp 00:07:21.652 ************************************ 00:07:21.652 14:52:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.652 14:52:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.652 14:52:40 -- accel/accel.sh@17 -- # local accel_module 00:07:21.652 14:52:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.652 14:52:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.652 14:52:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.652 14:52:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.652 14:52:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.652 14:52:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.652 14:52:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.652 14:52:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.652 14:52:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.652 14:52:40 -- accel/accel.sh@42 -- # jq -r . 00:07:21.652 [2024-06-11 14:52:40.306010] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
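Editor's note, not part of the recorded run: the trace above keeps echoing the same accel_perf command line, so a minimal standalone sketch of it may help when reading the rest of the section. The flags are the ones visible in the log (-t run time in seconds, -w workload type, -l input file, -y which matches the "Verify: Yes" line in the config dumps); the assumption that accel_perf falls back to the software module when no -c <json> config is supplied is mine, not something the log states.

```bash
# Minimal sketch of the invocation seen in the trace (software path assumed).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y
```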
00:07:21.652 [2024-06-11 14:52:40.306083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098396 ] 00:07:21.652 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.652 [2024-06-11 14:52:40.395134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.652 [2024-06-11 14:52:40.477957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.028 14:52:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:23.028 00:07:23.028 SPDK Configuration: 00:07:23.028 Core mask: 0x1 00:07:23.028 00:07:23.028 Accel Perf Configuration: 00:07:23.028 Workload Type: decompress 00:07:23.028 Transfer size: 4096 bytes 00:07:23.028 Vector count 1 00:07:23.028 Module: software 00:07:23.028 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.028 Queue depth: 32 00:07:23.028 Allocate depth: 32 00:07:23.028 # threads/core: 1 00:07:23.028 Run time: 1 seconds 00:07:23.028 Verify: Yes 00:07:23.028 00:07:23.028 Running for 1 seconds... 00:07:23.028 00:07:23.028 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.028 ------------------------------------------------------------------------------------ 00:07:23.028 0,0 46784/s 86 MiB/s 0 0 00:07:23.028 ==================================================================================== 00:07:23.028 Total 46784/s 182 MiB/s 0 0' 00:07:23.028 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.028 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.028 14:52:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.028 14:52:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.028 14:52:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.028 14:52:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.028 14:52:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.028 14:52:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.028 14:52:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.028 14:52:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.028 14:52:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.028 14:52:41 -- accel/accel.sh@42 -- # jq -r . 00:07:23.028 [2024-06-11 14:52:41.719305] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
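A quick cross-check of the results table above, based only on the numbers the log itself reports: the Total row appears to be transfers/s multiplied by the 4096-byte transfer size (the per-core MiB/s column is evidently accounted differently and is left as recorded).

```bash
# Assumption: Total MiB/s = transfers/s * transfer size for this table.
echo $((46784 * 4096 / 1024 / 1024))   # prints 182, matching "Total 46784/s 182 MiB/s"
```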
00:07:23.028 [2024-06-11 14:52:41.719364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098668 ] 00:07:23.028 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.028 [2024-06-11 14:52:41.805150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.287 [2024-06-11 14:52:41.888069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val= 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val= 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val= 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val=0x1 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val= 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val= 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val=decompress 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val= 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val=software 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val=32 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 
-- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val=32 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val=1 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val=Yes 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val= 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:23.287 14:52:41 -- accel/accel.sh@21 -- # val= 00:07:23.287 14:52:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # IFS=: 00:07:23.287 14:52:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.664 14:52:43 -- accel/accel.sh@21 -- # val= 00:07:24.664 14:52:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.664 14:52:43 -- accel/accel.sh@21 -- # val= 00:07:24.664 14:52:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.664 14:52:43 -- accel/accel.sh@21 -- # val= 00:07:24.664 14:52:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.664 14:52:43 -- accel/accel.sh@21 -- # val= 00:07:24.664 14:52:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.664 14:52:43 -- accel/accel.sh@21 -- # val= 00:07:24.664 14:52:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.664 14:52:43 -- accel/accel.sh@21 -- # val= 00:07:24.664 14:52:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.664 14:52:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.664 14:52:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.664 14:52:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:24.664 14:52:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.664 00:07:24.664 real 0m2.828s 00:07:24.664 user 0m2.562s 00:07:24.664 sys 0m0.272s 00:07:24.664 14:52:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.664 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:07:24.664 ************************************ 00:07:24.664 END TEST accel_decomp 00:07:24.664 ************************************ 00:07:24.664 14:52:43 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.664 14:52:43 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:24.664 14:52:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.664 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:07:24.664 ************************************ 00:07:24.664 START TEST accel_decmop_full 00:07:24.664 ************************************ 00:07:24.664 14:52:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.664 14:52:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.664 14:52:43 -- accel/accel.sh@17 -- # local accel_module 00:07:24.664 14:52:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.664 14:52:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.664 14:52:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.664 14:52:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.664 14:52:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.664 14:52:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.664 14:52:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.664 14:52:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.664 14:52:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.664 14:52:43 -- accel/accel.sh@42 -- # jq -r . 00:07:24.664 [2024-06-11 14:52:43.173646] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:24.664 [2024-06-11 14:52:43.173711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098950 ] 00:07:24.664 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.664 [2024-06-11 14:52:43.263389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.664 [2024-06-11 14:52:43.346509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.042 14:52:44 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:26.042 00:07:26.042 SPDK Configuration: 00:07:26.042 Core mask: 0x1 00:07:26.042 00:07:26.042 Accel Perf Configuration: 00:07:26.042 Workload Type: decompress 00:07:26.042 Transfer size: 111250 bytes 00:07:26.042 Vector count 1 00:07:26.042 Module: software 00:07:26.042 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.042 Queue depth: 32 00:07:26.042 Allocate depth: 32 00:07:26.042 # threads/core: 1 00:07:26.042 Run time: 1 seconds 00:07:26.042 Verify: Yes 00:07:26.042 00:07:26.042 Running for 1 seconds... 
00:07:26.042 00:07:26.042 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.042 ------------------------------------------------------------------------------------ 00:07:26.042 0,0 3136/s 129 MiB/s 0 0 00:07:26.042 ==================================================================================== 00:07:26.042 Total 3136/s 332 MiB/s 0 0' 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.042 14:52:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.042 14:52:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.042 14:52:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.042 14:52:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.042 14:52:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.042 14:52:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.042 14:52:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.042 14:52:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.042 14:52:44 -- accel/accel.sh@42 -- # jq -r . 00:07:26.042 [2024-06-11 14:52:44.597919] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:26.042 [2024-06-11 14:52:44.597995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099223 ] 00:07:26.042 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.042 [2024-06-11 14:52:44.683964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.042 [2024-06-11 14:52:44.765779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val= 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val= 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val= 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val=0x1 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val= 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val= 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val=decompress 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:26.042 14:52:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val= 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val=software 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val=32 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val=32 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val=1 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val=Yes 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val= 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:26.042 14:52:44 -- accel/accel.sh@21 -- # val= 00:07:26.042 14:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # IFS=: 00:07:26.042 14:52:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.417 14:52:45 -- accel/accel.sh@21 -- # val= 00:07:27.417 14:52:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # IFS=: 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # read -r var val 00:07:27.417 14:52:45 -- accel/accel.sh@21 -- # val= 00:07:27.417 14:52:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # IFS=: 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # read -r var val 00:07:27.417 14:52:45 -- accel/accel.sh@21 -- # val= 00:07:27.417 14:52:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.417 14:52:45 -- 
accel/accel.sh@20 -- # IFS=: 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # read -r var val 00:07:27.417 14:52:45 -- accel/accel.sh@21 -- # val= 00:07:27.417 14:52:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # IFS=: 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # read -r var val 00:07:27.417 14:52:45 -- accel/accel.sh@21 -- # val= 00:07:27.417 14:52:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # IFS=: 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # read -r var val 00:07:27.417 14:52:45 -- accel/accel.sh@21 -- # val= 00:07:27.417 14:52:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # IFS=: 00:07:27.417 14:52:45 -- accel/accel.sh@20 -- # read -r var val 00:07:27.417 14:52:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.417 14:52:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:27.417 14:52:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.417 00:07:27.417 real 0m2.849s 00:07:27.417 user 0m2.568s 00:07:27.417 sys 0m0.286s 00:07:27.417 14:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.417 14:52:45 -- common/autotest_common.sh@10 -- # set +x 00:07:27.417 ************************************ 00:07:27.417 END TEST accel_decmop_full 00:07:27.417 ************************************ 00:07:27.417 14:52:46 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.417 14:52:46 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:27.417 14:52:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.417 14:52:46 -- common/autotest_common.sh@10 -- # set +x 00:07:27.417 ************************************ 00:07:27.417 START TEST accel_decomp_mcore 00:07:27.417 ************************************ 00:07:27.417 14:52:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.417 14:52:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.417 14:52:46 -- accel/accel.sh@17 -- # local accel_module 00:07:27.418 14:52:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.418 14:52:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.418 14:52:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.418 14:52:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.418 14:52:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.418 14:52:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.418 14:52:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.418 14:52:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.418 14:52:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.418 14:52:46 -- accel/accel.sh@42 -- # jq -r . 00:07:27.418 [2024-06-11 14:52:46.060603] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
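A note on the "_full" variant that just completed: the only difference in its command line is the extra "-o 0", and the configuration dump switches from a 4096-byte transfer size to the full 111250-byte chunk. The log does not spell out the flag's semantics, but the effect on the software decompress path is visible in the tables: 3136 transfers/s x 111250 bytes comes to roughly 332 MiB/s, versus about 182 MiB/s for the 4 KiB runs above, so larger chunks close to double the throughput on this host.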
00:07:27.418 [2024-06-11 14:52:46.060666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099503 ] 00:07:27.418 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.418 [2024-06-11 14:52:46.149564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.418 [2024-06-11 14:52:46.236076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.418 [2024-06-11 14:52:46.236177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.418 [2024-06-11 14:52:46.236267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.418 [2024-06-11 14:52:46.236268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.046 14:52:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:29.046 00:07:29.046 SPDK Configuration: 00:07:29.046 Core mask: 0xf 00:07:29.046 00:07:29.046 Accel Perf Configuration: 00:07:29.046 Workload Type: decompress 00:07:29.046 Transfer size: 4096 bytes 00:07:29.046 Vector count 1 00:07:29.046 Module: software 00:07:29.046 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.046 Queue depth: 32 00:07:29.046 Allocate depth: 32 00:07:29.046 # threads/core: 1 00:07:29.046 Run time: 1 seconds 00:07:29.046 Verify: Yes 00:07:29.046 00:07:29.046 Running for 1 seconds... 00:07:29.046 00:07:29.046 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.046 ------------------------------------------------------------------------------------ 00:07:29.046 0,0 42368/s 78 MiB/s 0 0 00:07:29.046 3,0 42624/s 78 MiB/s 0 0 00:07:29.046 2,0 67392/s 124 MiB/s 0 0 00:07:29.046 1,0 42560/s 78 MiB/s 0 0 00:07:29.046 ==================================================================================== 00:07:29.046 Total 194944/s 761 MiB/s 0 0' 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:29.046 14:52:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:29.046 14:52:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.046 14:52:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.046 14:52:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.046 14:52:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.046 14:52:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.046 14:52:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.046 14:52:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.046 14:52:47 -- accel/accel.sh@42 -- # jq -r . 00:07:29.046 [2024-06-11 14:52:47.486342] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
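With "-m 0xf" the run above starts four reactors (cores 0-3) and the table gains one row per core; the four rows (42368 + 42624 + 67392 + 42560) sum to the 194944 transfers/s Total, about 761 MiB/s at 4 KiB. A small sketch for checking that sum on a captured table; perf.log is a hypothetical file holding the raw accel_perf output without the autotest timestamps, not something this run produced.

```bash
# Sum the per-core rows ("0,0 42368/s ...") of an accel_perf results table.
awk '/^[0-9]+,[0-9]+ /{ sub("/s", "", $2); sum += $2 }
     END { print sum " transfers/s across all cores" }' perf.log
```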
00:07:29.046 [2024-06-11 14:52:47.486415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099780 ] 00:07:29.046 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.046 [2024-06-11 14:52:47.572895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.046 [2024-06-11 14:52:47.658437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.046 [2024-06-11 14:52:47.658537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.046 [2024-06-11 14:52:47.658630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.046 [2024-06-11 14:52:47.658631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val= 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val= 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val= 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val=0xf 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val= 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val= 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val=decompress 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val= 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val=software 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val=32 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val=32 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val=1 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val=Yes 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val= 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.046 14:52:47 -- accel/accel.sh@21 -- # val= 00:07:29.046 14:52:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # IFS=: 00:07:29.046 14:52:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@21 -- # val= 00:07:30.427 14:52:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@21 -- # val= 00:07:30.427 14:52:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@21 -- # val= 00:07:30.427 14:52:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@21 -- # val= 00:07:30.427 14:52:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@21 -- # val= 00:07:30.427 14:52:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@21 -- # val= 00:07:30.427 14:52:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@21 -- # val= 00:07:30.427 14:52:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@21 -- # val= 00:07:30.427 14:52:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.427 
14:52:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@21 -- # val= 00:07:30.427 14:52:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.427 14:52:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.427 14:52:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.427 14:52:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:30.427 14:52:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.427 00:07:30.427 real 0m2.854s 00:07:30.427 user 0m9.261s 00:07:30.427 sys 0m0.307s 00:07:30.427 14:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.427 14:52:48 -- common/autotest_common.sh@10 -- # set +x 00:07:30.427 ************************************ 00:07:30.427 END TEST accel_decomp_mcore 00:07:30.427 ************************************ 00:07:30.427 14:52:48 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:30.427 14:52:48 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:30.427 14:52:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.427 14:52:48 -- common/autotest_common.sh@10 -- # set +x 00:07:30.427 ************************************ 00:07:30.427 START TEST accel_decomp_full_mcore 00:07:30.427 ************************************ 00:07:30.427 14:52:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:30.427 14:52:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.427 14:52:48 -- accel/accel.sh@17 -- # local accel_module 00:07:30.427 14:52:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:30.427 14:52:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:30.427 14:52:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.427 14:52:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.427 14:52:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.427 14:52:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.427 14:52:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.428 14:52:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.428 14:52:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.428 14:52:48 -- accel/accel.sh@42 -- # jq -r . 00:07:30.428 [2024-06-11 14:52:48.955663] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:30.428 [2024-06-11 14:52:48.955739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100063 ] 00:07:30.428 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.428 [2024-06-11 14:52:49.043273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.428 [2024-06-11 14:52:49.129191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.428 [2024-06-11 14:52:49.129292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.428 [2024-06-11 14:52:49.129403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.428 [2024-06-11 14:52:49.129402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.808 14:52:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:31.808 00:07:31.808 SPDK Configuration: 00:07:31.808 Core mask: 0xf 00:07:31.808 00:07:31.808 Accel Perf Configuration: 00:07:31.808 Workload Type: decompress 00:07:31.808 Transfer size: 111250 bytes 00:07:31.808 Vector count 1 00:07:31.808 Module: software 00:07:31.808 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.808 Queue depth: 32 00:07:31.808 Allocate depth: 32 00:07:31.808 # threads/core: 1 00:07:31.808 Run time: 1 seconds 00:07:31.808 Verify: Yes 00:07:31.808 00:07:31.808 Running for 1 seconds... 00:07:31.808 00:07:31.808 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.808 ------------------------------------------------------------------------------------ 00:07:31.808 0,0 3136/s 129 MiB/s 0 0 00:07:31.808 3,0 3136/s 129 MiB/s 0 0 00:07:31.808 2,0 5184/s 214 MiB/s 0 0 00:07:31.808 1,0 3136/s 129 MiB/s 0 0 00:07:31.808 ==================================================================================== 00:07:31.808 Total 14592/s 1548 MiB/s 0 0' 00:07:31.808 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.808 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.808 14:52:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.808 14:52:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.808 14:52:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.808 14:52:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.808 14:52:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.808 14:52:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.808 14:52:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.808 14:52:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.809 14:52:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.809 14:52:50 -- accel/accel.sh@42 -- # jq -r . 00:07:31.809 [2024-06-11 14:52:50.395558] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
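The same per-core bookkeeping holds for the full-chunk multi-core run above: 3136 + 3136 + 5184 + 3136 = 14592 transfers/s, and 14592 x 111250 bytes is about 1548 MiB/s as reported. In both 0xf runs core 2 posts roughly 1.6x the throughput of cores 0, 1 and 3; the log gives no reason for the asymmetry, so it is only noted here as an observation.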
00:07:31.809 [2024-06-11 14:52:50.395634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100330 ] 00:07:31.809 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.809 [2024-06-11 14:52:50.483477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.809 [2024-06-11 14:52:50.568896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.809 [2024-06-11 14:52:50.568985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.809 [2024-06-11 14:52:50.569101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.809 [2024-06-11 14:52:50.569102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val= 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val= 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val= 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val=0xf 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val= 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val= 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val=decompress 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val= 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val=software 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val=32 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val=32 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val=1 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val=Yes 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val= 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.809 14:52:50 -- accel/accel.sh@21 -- # val= 00:07:31.809 14:52:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.809 14:52:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@21 -- # val= 00:07:33.190 14:52:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@21 -- # val= 00:07:33.190 14:52:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@21 -- # val= 00:07:33.190 14:52:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@21 -- # val= 00:07:33.190 14:52:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@21 -- # val= 00:07:33.190 14:52:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@21 -- # val= 00:07:33.190 14:52:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@21 -- # val= 00:07:33.190 14:52:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@21 -- # val= 00:07:33.190 14:52:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.190 
14:52:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@21 -- # val= 00:07:33.190 14:52:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.190 14:52:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.190 14:52:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.190 14:52:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:33.190 14:52:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.190 00:07:33.190 real 0m2.884s 00:07:33.190 user 0m9.375s 00:07:33.190 sys 0m0.302s 00:07:33.190 14:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.190 14:52:51 -- common/autotest_common.sh@10 -- # set +x 00:07:33.190 ************************************ 00:07:33.190 END TEST accel_decomp_full_mcore 00:07:33.190 ************************************ 00:07:33.190 14:52:51 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.190 14:52:51 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:33.190 14:52:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.190 14:52:51 -- common/autotest_common.sh@10 -- # set +x 00:07:33.190 ************************************ 00:07:33.190 START TEST accel_decomp_mthread 00:07:33.190 ************************************ 00:07:33.190 14:52:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.190 14:52:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.190 14:52:51 -- accel/accel.sh@17 -- # local accel_module 00:07:33.190 14:52:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.190 14:52:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.190 14:52:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.190 14:52:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.190 14:52:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.190 14:52:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.190 14:52:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.190 14:52:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.190 14:52:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.190 14:52:51 -- accel/accel.sh@42 -- # jq -r . 00:07:33.190 [2024-06-11 14:52:51.878143] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:33.190 [2024-06-11 14:52:51.878220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100623 ] 00:07:33.190 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.190 [2024-06-11 14:52:51.965363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.448 [2024-06-11 14:52:52.048562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.827 14:52:53 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:34.827 00:07:34.827 SPDK Configuration: 00:07:34.827 Core mask: 0x1 00:07:34.827 00:07:34.827 Accel Perf Configuration: 00:07:34.827 Workload Type: decompress 00:07:34.827 Transfer size: 4096 bytes 00:07:34.827 Vector count 1 00:07:34.827 Module: software 00:07:34.827 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.827 Queue depth: 32 00:07:34.827 Allocate depth: 32 00:07:34.827 # threads/core: 2 00:07:34.828 Run time: 1 seconds 00:07:34.828 Verify: Yes 00:07:34.828 00:07:34.828 Running for 1 seconds... 00:07:34.828 00:07:34.828 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.828 ------------------------------------------------------------------------------------ 00:07:34.828 0,1 23712/s 43 MiB/s 0 0 00:07:34.828 0,0 23648/s 43 MiB/s 0 0 00:07:34.828 ==================================================================================== 00:07:34.828 Total 47360/s 185 MiB/s 0 0' 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:34.828 14:52:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:34.828 14:52:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.828 14:52:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.828 14:52:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.828 14:52:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.828 14:52:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.828 14:52:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.828 14:52:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.828 14:52:53 -- accel/accel.sh@42 -- # jq -r . 00:07:34.828 [2024-06-11 14:52:53.296119] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
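With "-T 2" the configuration reports two threads per core, and the table splits core 0 into rows 0,0 and 0,1 (23648/s and 23712/s), which together give the 47360/s Total, about 185 MiB/s. That is essentially the same as the single-thread decompress result earlier (182 MiB/s), which is unsurprising when both threads time-share the one core allowed by the 0x1 mask.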
00:07:34.828 [2024-06-11 14:52:53.296195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100887 ] 00:07:34.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.828 [2024-06-11 14:52:53.383173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.828 [2024-06-11 14:52:53.468061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val= 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val= 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val= 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val=0x1 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val= 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val= 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val=decompress 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val= 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val=software 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val=32 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 
-- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val=32 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val=2 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val=Yes 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val= 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.828 14:52:53 -- accel/accel.sh@21 -- # val= 00:07:34.828 14:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.828 14:52:53 -- accel/accel.sh@20 -- # read -r var val 00:07:36.207 14:52:54 -- accel/accel.sh@21 -- # val= 00:07:36.207 14:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # IFS=: 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # read -r var val 00:07:36.207 14:52:54 -- accel/accel.sh@21 -- # val= 00:07:36.207 14:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # IFS=: 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # read -r var val 00:07:36.207 14:52:54 -- accel/accel.sh@21 -- # val= 00:07:36.207 14:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # IFS=: 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # read -r var val 00:07:36.207 14:52:54 -- accel/accel.sh@21 -- # val= 00:07:36.207 14:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # IFS=: 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # read -r var val 00:07:36.207 14:52:54 -- accel/accel.sh@21 -- # val= 00:07:36.207 14:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # IFS=: 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # read -r var val 00:07:36.207 14:52:54 -- accel/accel.sh@21 -- # val= 00:07:36.207 14:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # IFS=: 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # read -r var val 00:07:36.207 14:52:54 -- accel/accel.sh@21 -- # val= 00:07:36.207 14:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # IFS=: 00:07:36.207 14:52:54 -- accel/accel.sh@20 -- # read -r var val 00:07:36.207 14:52:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.207 14:52:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:36.207 14:52:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.207 00:07:36.207 real 0m2.843s 00:07:36.207 user 0m2.575s 00:07:36.207 sys 0m0.274s 00:07:36.207 14:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.207 14:52:54 -- common/autotest_common.sh@10 -- # set +x 
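(Editor's note) The decompress case that finishes above and the full-buffer variant that follows both drive the same accel_perf example binary; the xtrace lines only show the test wrapper echoing its arguments back before launching it. Below is a minimal standalone sketch of an equivalent run, assuming the workspace path used in this job and reading the flag meanings off the configuration summary the tool prints (-t run time in seconds, -w workload type, -l input file, -y verify, -T threads per core); the wrapper additionally feeds a JSON accel config over -c /dev/fd/62, omitted here.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software decompress of the bundled bib file, verified, 2 threads on the single enabled core
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2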
00:07:36.207 ************************************ 00:07:36.207 END TEST accel_decomp_mthread 00:07:36.207 ************************************ 00:07:36.207 14:52:54 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:36.207 14:52:54 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:36.207 14:52:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.207 14:52:54 -- common/autotest_common.sh@10 -- # set +x 00:07:36.207 ************************************ 00:07:36.207 START TEST accel_deomp_full_mthread 00:07:36.207 ************************************ 00:07:36.207 14:52:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:36.207 14:52:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.207 14:52:54 -- accel/accel.sh@17 -- # local accel_module 00:07:36.207 14:52:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:36.207 14:52:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:36.207 14:52:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.207 14:52:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.207 14:52:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.207 14:52:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.207 14:52:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.207 14:52:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.207 14:52:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.207 14:52:54 -- accel/accel.sh@42 -- # jq -r . 00:07:36.207 [2024-06-11 14:52:54.759558] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:36.207 [2024-06-11 14:52:54.759618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101174 ] 00:07:36.207 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.207 [2024-06-11 14:52:54.845396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.207 [2024-06-11 14:52:54.928274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.585 14:52:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:37.585 00:07:37.585 SPDK Configuration: 00:07:37.585 Core mask: 0x1 00:07:37.585 00:07:37.585 Accel Perf Configuration: 00:07:37.585 Workload Type: decompress 00:07:37.585 Transfer size: 111250 bytes 00:07:37.585 Vector count 1 00:07:37.585 Module: software 00:07:37.585 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.585 Queue depth: 32 00:07:37.585 Allocate depth: 32 00:07:37.585 # threads/core: 2 00:07:37.585 Run time: 1 seconds 00:07:37.585 Verify: Yes 00:07:37.585 00:07:37.585 Running for 1 seconds... 
00:07:37.585 00:07:37.585 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.585 ------------------------------------------------------------------------------------ 00:07:37.585 0,1 1600/s 66 MiB/s 0 0 00:07:37.585 0,0 1600/s 66 MiB/s 0 0 00:07:37.585 ==================================================================================== 00:07:37.585 Total 3200/s 339 MiB/s 0 0' 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.585 14:52:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.585 14:52:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.585 14:52:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.585 14:52:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.585 14:52:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.585 14:52:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.585 14:52:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.585 14:52:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.585 14:52:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.585 14:52:56 -- accel/accel.sh@42 -- # jq -r . 00:07:37.585 [2024-06-11 14:52:56.205526] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:37.585 [2024-06-11 14:52:56.205601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101441 ] 00:07:37.585 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.585 [2024-06-11 14:52:56.291562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.585 [2024-06-11 14:52:56.373836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.585 14:52:56 -- accel/accel.sh@21 -- # val= 00:07:37.585 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.585 14:52:56 -- accel/accel.sh@21 -- # val= 00:07:37.585 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.585 14:52:56 -- accel/accel.sh@21 -- # val= 00:07:37.585 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.585 14:52:56 -- accel/accel.sh@21 -- # val=0x1 00:07:37.585 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.585 14:52:56 -- accel/accel.sh@21 -- # val= 00:07:37.585 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.585 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.844 14:52:56 -- accel/accel.sh@21 -- # val= 00:07:37.844 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.844 14:52:56 -- accel/accel.sh@21 -- # val=decompress 00:07:37.844 
14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.844 14:52:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.844 14:52:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:37.844 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.844 14:52:56 -- accel/accel.sh@21 -- # val= 00:07:37.844 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.844 14:52:56 -- accel/accel.sh@21 -- # val=software 00:07:37.844 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.844 14:52:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.844 14:52:56 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.844 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.844 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.844 14:52:56 -- accel/accel.sh@21 -- # val=32 00:07:37.845 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.845 14:52:56 -- accel/accel.sh@21 -- # val=32 00:07:37.845 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.845 14:52:56 -- accel/accel.sh@21 -- # val=2 00:07:37.845 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.845 14:52:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.845 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.845 14:52:56 -- accel/accel.sh@21 -- # val=Yes 00:07:37.845 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.845 14:52:56 -- accel/accel.sh@21 -- # val= 00:07:37.845 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.845 14:52:56 -- accel/accel.sh@21 -- # val= 00:07:37.845 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.845 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:07:38.783 14:52:57 -- accel/accel.sh@21 -- # val= 00:07:38.783 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.783 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.783 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.783 14:52:57 -- accel/accel.sh@21 -- # val= 00:07:38.783 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.783 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.784 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:07:39.043 14:52:57 -- accel/accel.sh@21 -- # val= 00:07:39.043 14:52:57 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:07:39.043 14:52:57 -- accel/accel.sh@21 -- # val= 00:07:39.043 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:07:39.043 14:52:57 -- accel/accel.sh@21 -- # val= 00:07:39.043 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:07:39.043 14:52:57 -- accel/accel.sh@21 -- # val= 00:07:39.043 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:07:39.043 14:52:57 -- accel/accel.sh@21 -- # val= 00:07:39.043 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:07:39.043 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:07:39.043 14:52:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.043 14:52:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.043 14:52:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.043 00:07:39.043 real 0m2.897s 00:07:39.043 user 0m2.623s 00:07:39.043 sys 0m0.279s 00:07:39.043 14:52:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.043 14:52:57 -- common/autotest_common.sh@10 -- # set +x 00:07:39.043 ************************************ 00:07:39.043 END TEST accel_deomp_full_mthread 00:07:39.043 ************************************ 00:07:39.043 14:52:57 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:39.043 14:52:57 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.043 14:52:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:39.043 14:52:57 -- accel/accel.sh@129 -- # build_accel_config 00:07:39.043 14:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.043 14:52:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.043 14:52:57 -- common/autotest_common.sh@10 -- # set +x 00:07:39.043 14:52:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.043 14:52:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.043 14:52:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.043 14:52:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.043 14:52:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.043 14:52:57 -- accel/accel.sh@42 -- # jq -r . 00:07:39.043 ************************************ 00:07:39.043 START TEST accel_dif_functional_tests 00:07:39.043 ************************************ 00:07:39.043 14:52:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.043 [2024-06-11 14:52:57.713189] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:39.043 [2024-06-11 14:52:57.713250] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101729 ] 00:07:39.043 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.043 [2024-06-11 14:52:57.801010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.303 [2024-06-11 14:52:57.886537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.303 [2024-06-11 14:52:57.886637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.303 [2024-06-11 14:52:57.886638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.303 00:07:39.303 00:07:39.303 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.303 http://cunit.sourceforge.net/ 00:07:39.303 00:07:39.303 00:07:39.303 Suite: accel_dif 00:07:39.303 Test: verify: DIF generated, GUARD check ...passed 00:07:39.303 Test: verify: DIF generated, APPTAG check ...passed 00:07:39.303 Test: verify: DIF generated, REFTAG check ...passed 00:07:39.303 Test: verify: DIF not generated, GUARD check ...[2024-06-11 14:52:57.961566] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:39.303 [2024-06-11 14:52:57.961618] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:39.303 passed 00:07:39.303 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 14:52:57.961658] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:39.303 [2024-06-11 14:52:57.961677] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:39.303 passed 00:07:39.303 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 14:52:57.961700] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:39.303 [2024-06-11 14:52:57.961719] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:39.303 passed 00:07:39.303 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:39.303 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 14:52:57.961774] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:39.303 passed 00:07:39.303 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:39.303 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:39.303 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:39.303 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-11 14:52:57.961910] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:39.303 passed 00:07:39.303 Test: generate copy: DIF generated, GUARD check ...passed 00:07:39.303 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:39.303 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:39.303 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:39.303 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:39.303 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:39.303 Test: generate copy: iovecs-len validate ...[2024-06-11 14:52:57.962142] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:39.303 passed 00:07:39.303 Test: generate copy: buffer alignment validate ...passed 00:07:39.303 00:07:39.303 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.303 suites 1 1 n/a 0 0 00:07:39.303 tests 20 20 20 0 0 00:07:39.303 asserts 204 204 204 0 n/a 00:07:39.303 00:07:39.303 Elapsed time = 0.002 seconds 00:07:39.563 00:07:39.563 real 0m0.499s 00:07:39.563 user 0m0.714s 00:07:39.563 sys 0m0.167s 00:07:39.563 14:52:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.563 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 ************************************ 00:07:39.563 END TEST accel_dif_functional_tests 00:07:39.563 ************************************ 00:07:39.563 00:07:39.563 real 1m0.671s 00:07:39.563 user 1m8.218s 00:07:39.563 sys 0m7.383s 00:07:39.563 14:52:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.563 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 ************************************ 00:07:39.563 END TEST accel 00:07:39.563 ************************************ 00:07:39.563 14:52:58 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:39.563 14:52:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:39.563 14:52:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.563 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 ************************************ 00:07:39.563 START TEST accel_rpc 00:07:39.563 ************************************ 00:07:39.563 14:52:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:39.563 * Looking for test storage... 00:07:39.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:39.563 14:52:58 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.563 14:52:58 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3101982 00:07:39.563 14:52:58 -- accel/accel_rpc.sh@15 -- # waitforlisten 3101982 00:07:39.563 14:52:58 -- common/autotest_common.sh@819 -- # '[' -z 3101982 ']' 00:07:39.563 14:52:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.563 14:52:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:39.563 14:52:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.563 14:52:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:39.563 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 14:52:58 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:39.563 [2024-06-11 14:52:58.362018] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:39.563 [2024-06-11 14:52:58.362087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101982 ] 00:07:39.563 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.823 [2024-06-11 14:52:58.450704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.823 [2024-06-11 14:52:58.539254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.823 [2024-06-11 14:52:58.539403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.758 14:52:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:40.758 14:52:59 -- common/autotest_common.sh@852 -- # return 0 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:40.758 14:52:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.758 14:52:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.758 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.758 ************************************ 00:07:40.758 START TEST accel_assign_opcode 00:07:40.758 ************************************ 00:07:40.758 14:52:59 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:40.758 14:52:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.758 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.758 [2024-06-11 14:52:59.285642] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:40.758 14:52:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:40.758 14:52:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.758 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.758 [2024-06-11 14:52:59.293659] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:40.758 14:52:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:40.758 14:52:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.758 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.758 14:52:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@42 -- # grep software 00:07:40.758 14:52:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.758 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.758 14:52:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.758 software 00:07:40.758 00:07:40.758 real 0m0.259s 00:07:40.758 user 0m0.047s 00:07:40.758 sys 0m0.007s 00:07:40.758 14:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.758 14:52:59 -- common/autotest_common.sh@10 -- # set +x 
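(Editor's note) The accel_assign_opcode flow above only works because spdk_tgt was started with --wait-for-rpc: the opcode-to-module assignment has to be issued before the accel framework is initialized, after which accel_get_opc_assignments reports the module that stuck. A condensed sketch of the same sequence against a running target, using the rpc.py script and the RPC names shown in the trace (paths assume this workspace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # must run while the target is still waiting (--wait-for-rpc), before framework_start_init
  $RPC accel_assign_opc -o copy -m software
  $RPC framework_start_init
  # confirm the copy opcode ended up on the software module
  $RPC accel_get_opc_assignments | jq -r .copy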
00:07:40.758 ************************************ 00:07:40.758 END TEST accel_assign_opcode 00:07:40.758 ************************************ 00:07:40.758 14:52:59 -- accel/accel_rpc.sh@55 -- # killprocess 3101982 00:07:40.758 14:52:59 -- common/autotest_common.sh@926 -- # '[' -z 3101982 ']' 00:07:40.758 14:52:59 -- common/autotest_common.sh@930 -- # kill -0 3101982 00:07:40.758 14:52:59 -- common/autotest_common.sh@931 -- # uname 00:07:40.759 14:52:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:40.759 14:52:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3101982 00:07:41.017 14:52:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:41.017 14:52:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:41.017 14:52:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3101982' 00:07:41.017 killing process with pid 3101982 00:07:41.017 14:52:59 -- common/autotest_common.sh@945 -- # kill 3101982 00:07:41.017 14:52:59 -- common/autotest_common.sh@950 -- # wait 3101982 00:07:41.275 00:07:41.275 real 0m1.731s 00:07:41.275 user 0m1.865s 00:07:41.275 sys 0m0.442s 00:07:41.275 14:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.275 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:07:41.275 ************************************ 00:07:41.275 END TEST accel_rpc 00:07:41.275 ************************************ 00:07:41.275 14:53:00 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:41.275 14:53:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.275 14:53:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.275 14:53:00 -- common/autotest_common.sh@10 -- # set +x 00:07:41.275 ************************************ 00:07:41.275 START TEST app_cmdline 00:07:41.275 ************************************ 00:07:41.275 14:53:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:41.275 * Looking for test storage... 00:07:41.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:41.276 14:53:00 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:41.276 14:53:00 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3102380 00:07:41.276 14:53:00 -- app/cmdline.sh@18 -- # waitforlisten 3102380 00:07:41.276 14:53:00 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:41.276 14:53:00 -- common/autotest_common.sh@819 -- # '[' -z 3102380 ']' 00:07:41.276 14:53:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.276 14:53:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:41.276 14:53:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.276 14:53:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:41.276 14:53:00 -- common/autotest_common.sh@10 -- # set +x 00:07:41.534 [2024-06-11 14:53:00.154722] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:41.534 [2024-06-11 14:53:00.154786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102380 ] 00:07:41.534 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.534 [2024-06-11 14:53:00.244876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.534 [2024-06-11 14:53:00.330554] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:41.534 [2024-06-11 14:53:00.330709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.470 14:53:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:42.470 14:53:01 -- common/autotest_common.sh@852 -- # return 0 00:07:42.470 14:53:01 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:42.471 { 00:07:42.471 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:07:42.471 "fields": { 00:07:42.471 "major": 24, 00:07:42.471 "minor": 1, 00:07:42.471 "patch": 1, 00:07:42.471 "suffix": "-pre", 00:07:42.471 "commit": "130b9406a" 00:07:42.471 } 00:07:42.471 } 00:07:42.729 14:53:01 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:42.729 14:53:01 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:42.729 14:53:01 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:42.729 14:53:01 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:42.729 14:53:01 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:42.729 14:53:01 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:42.729 14:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.729 14:53:01 -- common/autotest_common.sh@10 -- # set +x 00:07:42.729 14:53:01 -- app/cmdline.sh@26 -- # sort 00:07:42.729 14:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.729 14:53:01 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:42.729 14:53:01 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:42.729 14:53:01 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.729 14:53:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:42.729 14:53:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.729 14:53:01 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.729 14:53:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.729 14:53:01 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.729 14:53:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.729 14:53:01 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.729 14:53:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.729 14:53:01 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.729 14:53:01 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:42.729 14:53:01 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.988 request: 00:07:42.988 { 00:07:42.988 "method": "env_dpdk_get_mem_stats", 00:07:42.988 "req_id": 1 00:07:42.988 } 00:07:42.988 Got JSON-RPC error response 00:07:42.988 response: 00:07:42.988 { 00:07:42.988 "code": -32601, 00:07:42.988 "message": "Method not found" 00:07:42.988 } 00:07:42.988 14:53:01 -- common/autotest_common.sh@643 -- # es=1 00:07:42.988 14:53:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:42.988 14:53:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:42.988 14:53:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:42.988 14:53:01 -- app/cmdline.sh@1 -- # killprocess 3102380 00:07:42.988 14:53:01 -- common/autotest_common.sh@926 -- # '[' -z 3102380 ']' 00:07:42.988 14:53:01 -- common/autotest_common.sh@930 -- # kill -0 3102380 00:07:42.988 14:53:01 -- common/autotest_common.sh@931 -- # uname 00:07:42.988 14:53:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:42.988 14:53:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3102380 00:07:42.988 14:53:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:42.988 14:53:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:42.988 14:53:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3102380' 00:07:42.988 killing process with pid 3102380 00:07:42.988 14:53:01 -- common/autotest_common.sh@945 -- # kill 3102380 00:07:42.988 14:53:01 -- common/autotest_common.sh@950 -- # wait 3102380 00:07:43.246 00:07:43.246 real 0m1.998s 00:07:43.246 user 0m2.556s 00:07:43.246 sys 0m0.466s 00:07:43.246 14:53:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.246 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.246 ************************************ 00:07:43.246 END TEST app_cmdline 00:07:43.246 ************************************ 00:07:43.246 14:53:02 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:43.246 14:53:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.246 14:53:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.246 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.246 ************************************ 00:07:43.246 START TEST version 00:07:43.246 ************************************ 00:07:43.246 14:53:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:43.504 * Looking for test storage... 
00:07:43.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:43.504 14:53:02 -- app/version.sh@17 -- # get_header_version major 00:07:43.504 14:53:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.504 14:53:02 -- app/version.sh@14 -- # cut -f2 00:07:43.504 14:53:02 -- app/version.sh@14 -- # tr -d '"' 00:07:43.504 14:53:02 -- app/version.sh@17 -- # major=24 00:07:43.504 14:53:02 -- app/version.sh@18 -- # get_header_version minor 00:07:43.504 14:53:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.504 14:53:02 -- app/version.sh@14 -- # cut -f2 00:07:43.504 14:53:02 -- app/version.sh@14 -- # tr -d '"' 00:07:43.504 14:53:02 -- app/version.sh@18 -- # minor=1 00:07:43.504 14:53:02 -- app/version.sh@19 -- # get_header_version patch 00:07:43.504 14:53:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.504 14:53:02 -- app/version.sh@14 -- # cut -f2 00:07:43.504 14:53:02 -- app/version.sh@14 -- # tr -d '"' 00:07:43.504 14:53:02 -- app/version.sh@19 -- # patch=1 00:07:43.504 14:53:02 -- app/version.sh@20 -- # get_header_version suffix 00:07:43.504 14:53:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.504 14:53:02 -- app/version.sh@14 -- # cut -f2 00:07:43.504 14:53:02 -- app/version.sh@14 -- # tr -d '"' 00:07:43.504 14:53:02 -- app/version.sh@20 -- # suffix=-pre 00:07:43.504 14:53:02 -- app/version.sh@22 -- # version=24.1 00:07:43.504 14:53:02 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:43.504 14:53:02 -- app/version.sh@25 -- # version=24.1.1 00:07:43.504 14:53:02 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:43.504 14:53:02 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:43.504 14:53:02 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:43.504 14:53:02 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:43.504 14:53:02 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:43.504 00:07:43.504 real 0m0.158s 00:07:43.504 user 0m0.090s 00:07:43.504 sys 0m0.104s 00:07:43.504 14:53:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.504 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.504 ************************************ 00:07:43.504 END TEST version 00:07:43.504 ************************************ 00:07:43.504 14:53:02 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:43.504 14:53:02 -- spdk/autotest.sh@204 -- # uname -s 00:07:43.504 14:53:02 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:43.504 14:53:02 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:43.504 14:53:02 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:43.504 14:53:02 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:43.504 14:53:02 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:43.504 14:53:02 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:43.504 14:53:02 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:43.504 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.504 14:53:02 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:43.504 14:53:02 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:43.504 14:53:02 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:43.504 14:53:02 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:43.504 14:53:02 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:43.504 14:53:02 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:43.504 14:53:02 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:43.504 14:53:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:43.504 14:53:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.504 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.504 ************************************ 00:07:43.504 START TEST nvmf_tcp 00:07:43.504 ************************************ 00:07:43.504 14:53:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:43.763 * Looking for test storage... 00:07:43.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:43.763 14:53:02 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:43.763 14:53:02 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:43.763 14:53:02 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.763 14:53:02 -- nvmf/common.sh@7 -- # uname -s 00:07:43.763 14:53:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.763 14:53:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.763 14:53:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.763 14:53:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.763 14:53:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.763 14:53:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.763 14:53:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.763 14:53:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.763 14:53:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.763 14:53:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.763 14:53:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:43.763 14:53:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:43.763 14:53:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.763 14:53:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.763 14:53:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.763 14:53:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.763 14:53:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.763 14:53:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.763 14:53:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.763 14:53:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.763 14:53:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.763 14:53:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.763 14:53:02 -- paths/export.sh@5 -- # export PATH 00:07:43.763 14:53:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.763 14:53:02 -- nvmf/common.sh@46 -- # : 0 00:07:43.763 14:53:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:43.763 14:53:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:43.763 14:53:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:43.763 14:53:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.763 14:53:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.763 14:53:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:43.763 14:53:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:43.763 14:53:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:43.763 14:53:02 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:43.763 14:53:02 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:43.763 14:53:02 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:43.763 14:53:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:43.763 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.763 14:53:02 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:43.763 14:53:02 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:43.763 14:53:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:43.763 14:53:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.763 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.763 ************************************ 00:07:43.763 START TEST nvmf_example 00:07:43.763 ************************************ 00:07:43.763 14:53:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:43.763 * Looking for test storage... 
00:07:43.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.763 14:53:02 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.763 14:53:02 -- nvmf/common.sh@7 -- # uname -s 00:07:43.763 14:53:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.763 14:53:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.763 14:53:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.763 14:53:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.763 14:53:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.763 14:53:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.763 14:53:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.763 14:53:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.763 14:53:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.763 14:53:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.763 14:53:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:43.763 14:53:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:43.763 14:53:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.763 14:53:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.763 14:53:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.763 14:53:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.763 14:53:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.763 14:53:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.763 14:53:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.764 14:53:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.764 14:53:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.764 14:53:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.764 14:53:02 -- paths/export.sh@5 -- # export PATH 00:07:43.764 14:53:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.764 14:53:02 -- nvmf/common.sh@46 -- # : 0 00:07:43.764 14:53:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:43.764 14:53:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:43.764 14:53:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:43.764 14:53:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.764 14:53:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.764 14:53:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:43.764 14:53:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:43.764 14:53:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:43.764 14:53:02 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:43.764 14:53:02 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:43.764 14:53:02 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:43.764 14:53:02 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:43.764 14:53:02 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:43.764 14:53:02 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:43.764 14:53:02 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:43.764 14:53:02 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:43.764 14:53:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:43.764 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.764 14:53:02 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:43.764 14:53:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:43.764 14:53:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.764 14:53:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:43.764 14:53:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:43.764 14:53:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:43.764 14:53:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.764 14:53:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.764 14:53:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.764 14:53:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:43.764 14:53:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:43.764 14:53:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:43.764 14:53:02 -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.331 14:53:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:50.331 14:53:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:50.331 14:53:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:50.331 14:53:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:50.331 14:53:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:50.331 14:53:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:50.331 14:53:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:50.331 14:53:08 -- nvmf/common.sh@294 -- # net_devs=() 00:07:50.331 14:53:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:50.331 14:53:08 -- nvmf/common.sh@295 -- # e810=() 00:07:50.331 14:53:08 -- nvmf/common.sh@295 -- # local -ga e810 00:07:50.331 14:53:08 -- nvmf/common.sh@296 -- # x722=() 00:07:50.331 14:53:08 -- nvmf/common.sh@296 -- # local -ga x722 00:07:50.331 14:53:08 -- nvmf/common.sh@297 -- # mlx=() 00:07:50.331 14:53:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:50.331 14:53:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.331 14:53:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.331 14:53:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.331 14:53:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.331 14:53:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.331 14:53:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.332 14:53:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.332 14:53:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.332 14:53:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.332 14:53:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.332 14:53:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.332 14:53:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:50.332 14:53:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:50.332 14:53:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:50.332 14:53:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:50.332 14:53:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:50.332 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:50.332 14:53:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:50.332 14:53:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:50.332 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:50.332 14:53:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:07:50.332 14:53:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:50.332 14:53:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:50.332 14:53:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.332 14:53:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:50.332 14:53:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.332 14:53:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:50.332 Found net devices under 0000:af:00.0: cvl_0_0 00:07:50.332 14:53:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.332 14:53:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:50.332 14:53:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.332 14:53:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:50.332 14:53:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.332 14:53:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:50.332 Found net devices under 0000:af:00.1: cvl_0_1 00:07:50.332 14:53:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.332 14:53:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:50.332 14:53:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:50.332 14:53:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:50.332 14:53:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:50.332 14:53:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.332 14:53:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.332 14:53:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.332 14:53:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:50.332 14:53:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.332 14:53:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.332 14:53:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:50.332 14:53:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.332 14:53:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.332 14:53:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:50.332 14:53:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:50.332 14:53:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.332 14:53:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.332 14:53:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.332 14:53:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.332 14:53:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:50.332 14:53:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.332 14:53:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.332 14:53:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.332 14:53:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:50.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:50.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:50.332 00:07:50.332 --- 10.0.0.2 ping statistics --- 00:07:50.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.332 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:50.332 14:53:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:07:50.332 00:07:50.332 --- 10.0.0.1 ping statistics --- 00:07:50.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.332 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:07:50.332 14:53:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.332 14:53:09 -- nvmf/common.sh@410 -- # return 0 00:07:50.332 14:53:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:50.332 14:53:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.332 14:53:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:50.332 14:53:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:50.332 14:53:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.332 14:53:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:50.332 14:53:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:50.332 14:53:09 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:50.332 14:53:09 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:50.332 14:53:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:50.332 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:07:50.332 14:53:09 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:50.332 14:53:09 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:50.332 14:53:09 -- target/nvmf_example.sh@34 -- # nvmfpid=3106505 00:07:50.332 14:53:09 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.332 14:53:09 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:50.332 14:53:09 -- target/nvmf_example.sh@36 -- # waitforlisten 3106505 00:07:50.332 14:53:09 -- common/autotest_common.sh@819 -- # '[' -z 3106505 ']' 00:07:50.332 14:53:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.332 14:53:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:50.332 14:53:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
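nvmftestinit, traced above, builds the TCP test topology: the second E810 port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side, the first port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, TCP port 4420 is opened, and both directions are ping-checked. The same steps condensed into a sketch (commands and addresses as printed in the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP (port 4420) through the host firewall
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1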
00:07:50.332 14:53:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:50.332 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:07:50.591 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.529 14:53:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:51.529 14:53:10 -- common/autotest_common.sh@852 -- # return 0 00:07:51.529 14:53:10 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:51.529 14:53:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:51.529 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:51.529 14:53:10 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.529 14:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.529 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:51.529 14:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.529 14:53:10 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:51.529 14:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.529 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:51.529 14:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.529 14:53:10 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:51.529 14:53:10 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.529 14:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.529 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:51.529 14:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.529 14:53:10 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:51.529 14:53:10 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.529 14:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.529 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:51.529 14:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.529 14:53:10 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.529 14:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.529 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:51.529 14:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.529 14:53:10 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:51.529 14:53:10 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:51.529 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.814 Initializing NVMe Controllers 00:08:03.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:03.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:03.814 Initialization complete. Launching workers. 
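The rpc_cmd calls replayed above stand up the example target before spdk_nvme_perf connects: a TCP transport, one 64 MB malloc bdev, a subsystem with that bdev as namespace 1, and a listener on 10.0.0.2:4420; spdk_nvme_perf then attaches for the 10-second randrw run whose results follow. As a hedged sketch, the same configuration issued through scripts/rpc.py directly (arguments copied from the log; using rpc.py instead of the rpc_cmd wrapper is an assumption made for illustration):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                       # 64 MB bdev, 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420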
00:08:03.814 ======================================================== 00:08:03.814 Latency(us) 00:08:03.814 Device Information : IOPS MiB/s Average min max 00:08:03.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14482.53 56.57 4419.40 1015.17 19079.86 00:08:03.814 ======================================================== 00:08:03.815 Total : 14482.53 56.57 4419.40 1015.17 19079.86 00:08:03.815 00:08:03.815 14:53:20 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:03.815 14:53:20 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:03.815 14:53:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:03.815 14:53:20 -- nvmf/common.sh@116 -- # sync 00:08:03.815 14:53:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:03.815 14:53:20 -- nvmf/common.sh@119 -- # set +e 00:08:03.815 14:53:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:03.815 14:53:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:03.815 rmmod nvme_tcp 00:08:03.815 rmmod nvme_fabrics 00:08:03.815 rmmod nvme_keyring 00:08:03.815 14:53:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:03.815 14:53:20 -- nvmf/common.sh@123 -- # set -e 00:08:03.815 14:53:20 -- nvmf/common.sh@124 -- # return 0 00:08:03.815 14:53:20 -- nvmf/common.sh@477 -- # '[' -n 3106505 ']' 00:08:03.815 14:53:20 -- nvmf/common.sh@478 -- # killprocess 3106505 00:08:03.815 14:53:20 -- common/autotest_common.sh@926 -- # '[' -z 3106505 ']' 00:08:03.815 14:53:20 -- common/autotest_common.sh@930 -- # kill -0 3106505 00:08:03.815 14:53:20 -- common/autotest_common.sh@931 -- # uname 00:08:03.815 14:53:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:03.815 14:53:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3106505 00:08:03.815 14:53:20 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:03.815 14:53:20 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:03.815 14:53:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3106505' 00:08:03.815 killing process with pid 3106505 00:08:03.815 14:53:20 -- common/autotest_common.sh@945 -- # kill 3106505 00:08:03.815 14:53:20 -- common/autotest_common.sh@950 -- # wait 3106505 00:08:03.815 nvmf threads initialize successfully 00:08:03.815 bdev subsystem init successfully 00:08:03.815 created a nvmf target service 00:08:03.815 create targets's poll groups done 00:08:03.815 all subsystems of target started 00:08:03.815 nvmf target is running 00:08:03.815 all subsystems of target stopped 00:08:03.815 destroy targets's poll groups done 00:08:03.815 destroyed the nvmf target service 00:08:03.815 bdev subsystem finish successfully 00:08:03.815 nvmf threads destroy successfully 00:08:03.815 14:53:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:03.815 14:53:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:03.815 14:53:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:03.815 14:53:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.815 14:53:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:03.815 14:53:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.815 14:53:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.815 14:53:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.075 14:53:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:04.075 14:53:22 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:04.075 14:53:22 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:04.075 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.335 00:08:04.335 real 0m20.495s 00:08:04.335 user 0m46.954s 00:08:04.335 sys 0m6.339s 00:08:04.335 14:53:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.335 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.335 ************************************ 00:08:04.335 END TEST nvmf_example 00:08:04.335 ************************************ 00:08:04.335 14:53:22 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:04.335 14:53:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:04.335 14:53:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.335 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.335 ************************************ 00:08:04.335 START TEST nvmf_filesystem 00:08:04.335 ************************************ 00:08:04.335 14:53:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:04.335 * Looking for test storage... 00:08:04.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.335 14:53:23 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:04.336 14:53:23 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:04.336 14:53:23 -- common/autotest_common.sh@34 -- # set -e 00:08:04.336 14:53:23 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:04.336 14:53:23 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:04.336 14:53:23 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:04.336 14:53:23 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:04.336 14:53:23 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:04.336 14:53:23 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:04.336 14:53:23 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:04.336 14:53:23 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:04.336 14:53:23 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:04.336 14:53:23 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:04.336 14:53:23 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:04.336 14:53:23 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:04.336 14:53:23 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:04.336 14:53:23 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:04.336 14:53:23 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:04.336 14:53:23 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:04.336 14:53:23 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:04.336 14:53:23 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:04.336 14:53:23 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:04.336 14:53:23 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:04.336 14:53:23 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:04.336 14:53:23 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:04.336 14:53:23 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:04.336 14:53:23 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:04.336 14:53:23 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:04.336 14:53:23 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:04.336 14:53:23 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:04.336 14:53:23 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:04.336 14:53:23 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:04.336 14:53:23 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:04.336 14:53:23 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:04.336 14:53:23 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:04.336 14:53:23 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:04.336 14:53:23 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:04.336 14:53:23 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:04.336 14:53:23 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:04.336 14:53:23 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:04.336 14:53:23 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:04.336 14:53:23 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:04.336 14:53:23 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:04.336 14:53:23 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:04.336 14:53:23 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:04.336 14:53:23 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:04.336 14:53:23 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:04.336 14:53:23 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:04.336 14:53:23 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:04.336 14:53:23 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:04.336 14:53:23 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:04.336 14:53:23 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:04.336 14:53:23 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:04.336 14:53:23 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:04.336 14:53:23 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:04.336 14:53:23 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:04.336 14:53:23 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:04.336 14:53:23 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:04.336 14:53:23 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:04.336 14:53:23 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:04.336 14:53:23 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:04.336 14:53:23 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:04.336 14:53:23 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:04.336 14:53:23 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:04.336 14:53:23 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:04.336 14:53:23 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:04.336 14:53:23 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:04.336 14:53:23 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:04.336 14:53:23 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:04.336 14:53:23 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:04.336 14:53:23 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:04.336 14:53:23 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:04.336 14:53:23 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
00:08:04.336 14:53:23 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:04.336 14:53:23 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:04.336 14:53:23 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:04.336 14:53:23 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:04.336 14:53:23 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:04.336 14:53:23 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:04.336 14:53:23 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:04.336 14:53:23 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:04.336 14:53:23 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:04.336 14:53:23 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:04.336 14:53:23 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:04.336 14:53:23 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:04.336 14:53:23 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:04.336 14:53:23 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:04.336 14:53:23 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:04.336 14:53:23 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:04.336 14:53:23 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:04.336 14:53:23 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:04.336 14:53:23 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:04.336 14:53:23 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:04.336 14:53:23 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:04.336 14:53:23 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:04.336 14:53:23 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:04.336 14:53:23 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:04.336 14:53:23 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:04.336 14:53:23 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:04.336 14:53:23 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:04.336 14:53:23 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:04.336 14:53:23 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:04.336 #define SPDK_CONFIG_H 00:08:04.336 #define SPDK_CONFIG_APPS 1 00:08:04.336 #define SPDK_CONFIG_ARCH native 00:08:04.336 #undef SPDK_CONFIG_ASAN 00:08:04.336 #undef SPDK_CONFIG_AVAHI 00:08:04.336 #undef SPDK_CONFIG_CET 00:08:04.336 #define SPDK_CONFIG_COVERAGE 1 00:08:04.336 #define SPDK_CONFIG_CROSS_PREFIX 00:08:04.336 #undef SPDK_CONFIG_CRYPTO 00:08:04.336 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:04.336 #undef SPDK_CONFIG_CUSTOMOCF 00:08:04.336 #undef SPDK_CONFIG_DAOS 00:08:04.336 #define SPDK_CONFIG_DAOS_DIR 00:08:04.336 #define SPDK_CONFIG_DEBUG 1 00:08:04.336 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:04.336 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:04.336 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:04.336 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:08:04.336 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:04.336 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:04.336 #define SPDK_CONFIG_EXAMPLES 1 00:08:04.336 #undef SPDK_CONFIG_FC 00:08:04.336 #define SPDK_CONFIG_FC_PATH 00:08:04.336 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:04.336 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:04.336 #undef SPDK_CONFIG_FUSE 00:08:04.336 #undef SPDK_CONFIG_FUZZER 00:08:04.336 #define SPDK_CONFIG_FUZZER_LIB 00:08:04.336 #undef SPDK_CONFIG_GOLANG 00:08:04.336 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:04.336 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:04.336 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:04.336 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:04.336 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:04.336 #define SPDK_CONFIG_IDXD 1 00:08:04.336 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:04.336 #undef SPDK_CONFIG_IPSEC_MB 00:08:04.336 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:04.336 #define SPDK_CONFIG_ISAL 1 00:08:04.336 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:04.336 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:04.336 #define SPDK_CONFIG_LIBDIR 00:08:04.336 #undef SPDK_CONFIG_LTO 00:08:04.336 #define SPDK_CONFIG_MAX_LCORES 00:08:04.336 #define SPDK_CONFIG_NVME_CUSE 1 00:08:04.336 #undef SPDK_CONFIG_OCF 00:08:04.336 #define SPDK_CONFIG_OCF_PATH 00:08:04.336 #define SPDK_CONFIG_OPENSSL_PATH 00:08:04.336 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:04.336 #undef SPDK_CONFIG_PGO_USE 00:08:04.336 #define SPDK_CONFIG_PREFIX /usr/local 00:08:04.336 #undef SPDK_CONFIG_RAID5F 00:08:04.336 #undef SPDK_CONFIG_RBD 00:08:04.336 #define SPDK_CONFIG_RDMA 1 00:08:04.336 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:04.336 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:04.336 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:04.336 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:04.336 #define SPDK_CONFIG_SHARED 1 00:08:04.336 #undef SPDK_CONFIG_SMA 00:08:04.336 #define SPDK_CONFIG_TESTS 1 00:08:04.336 #undef SPDK_CONFIG_TSAN 00:08:04.336 #define SPDK_CONFIG_UBLK 1 00:08:04.337 #define SPDK_CONFIG_UBSAN 1 00:08:04.337 #undef SPDK_CONFIG_UNIT_TESTS 00:08:04.337 #undef SPDK_CONFIG_URING 00:08:04.337 #define SPDK_CONFIG_URING_PATH 00:08:04.337 #undef SPDK_CONFIG_URING_ZNS 00:08:04.337 #undef SPDK_CONFIG_USDT 00:08:04.337 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:04.337 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:04.337 #undef SPDK_CONFIG_VFIO_USER 00:08:04.337 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:04.337 #define SPDK_CONFIG_VHOST 1 00:08:04.337 #define SPDK_CONFIG_VIRTIO 1 00:08:04.337 #undef SPDK_CONFIG_VTUNE 00:08:04.337 #define SPDK_CONFIG_VTUNE_DIR 00:08:04.337 #define SPDK_CONFIG_WERROR 1 00:08:04.337 #define SPDK_CONFIG_WPDK_DIR 00:08:04.337 #undef SPDK_CONFIG_XNVME 00:08:04.337 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:04.337 14:53:23 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:04.337 14:53:23 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.337 14:53:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.337 14:53:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.337 14:53:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.337 14:53:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.337 14:53:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.337 14:53:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.337 14:53:23 -- paths/export.sh@5 -- # export PATH 00:08:04.337 14:53:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.337 14:53:23 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:04.337 14:53:23 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:04.337 14:53:23 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:04.337 14:53:23 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:04.337 14:53:23 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:04.337 14:53:23 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:04.337 14:53:23 -- pm/common@16 -- # TEST_TAG=N/A 00:08:04.337 14:53:23 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:04.337 14:53:23 -- common/autotest_common.sh@52 -- # : 1 00:08:04.337 14:53:23 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:04.337 14:53:23 -- common/autotest_common.sh@56 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:04.337 14:53:23 -- 
common/autotest_common.sh@58 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:04.337 14:53:23 -- common/autotest_common.sh@60 -- # : 1 00:08:04.337 14:53:23 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:04.337 14:53:23 -- common/autotest_common.sh@62 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:04.337 14:53:23 -- common/autotest_common.sh@64 -- # : 00:08:04.337 14:53:23 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:04.337 14:53:23 -- common/autotest_common.sh@66 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:04.337 14:53:23 -- common/autotest_common.sh@68 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:04.337 14:53:23 -- common/autotest_common.sh@70 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:04.337 14:53:23 -- common/autotest_common.sh@72 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:04.337 14:53:23 -- common/autotest_common.sh@74 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:04.337 14:53:23 -- common/autotest_common.sh@76 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:04.337 14:53:23 -- common/autotest_common.sh@78 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:04.337 14:53:23 -- common/autotest_common.sh@80 -- # : 1 00:08:04.337 14:53:23 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:04.337 14:53:23 -- common/autotest_common.sh@82 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:04.337 14:53:23 -- common/autotest_common.sh@84 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:04.337 14:53:23 -- common/autotest_common.sh@86 -- # : 1 00:08:04.337 14:53:23 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:04.337 14:53:23 -- common/autotest_common.sh@88 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:04.337 14:53:23 -- common/autotest_common.sh@90 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:04.337 14:53:23 -- common/autotest_common.sh@92 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:04.337 14:53:23 -- common/autotest_common.sh@94 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:04.337 14:53:23 -- common/autotest_common.sh@96 -- # : tcp 00:08:04.337 14:53:23 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:04.337 14:53:23 -- common/autotest_common.sh@98 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:04.337 14:53:23 -- common/autotest_common.sh@100 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:04.337 14:53:23 -- common/autotest_common.sh@102 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:04.337 14:53:23 -- common/autotest_common.sh@104 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:04.337 
14:53:23 -- common/autotest_common.sh@106 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:04.337 14:53:23 -- common/autotest_common.sh@108 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:04.337 14:53:23 -- common/autotest_common.sh@110 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:04.337 14:53:23 -- common/autotest_common.sh@112 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:04.337 14:53:23 -- common/autotest_common.sh@114 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:04.337 14:53:23 -- common/autotest_common.sh@116 -- # : 1 00:08:04.337 14:53:23 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:04.337 14:53:23 -- common/autotest_common.sh@118 -- # : 00:08:04.337 14:53:23 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:04.337 14:53:23 -- common/autotest_common.sh@120 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:04.337 14:53:23 -- common/autotest_common.sh@122 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:04.337 14:53:23 -- common/autotest_common.sh@124 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:04.337 14:53:23 -- common/autotest_common.sh@126 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:04.337 14:53:23 -- common/autotest_common.sh@128 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:04.337 14:53:23 -- common/autotest_common.sh@130 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:04.337 14:53:23 -- common/autotest_common.sh@132 -- # : 00:08:04.337 14:53:23 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:04.337 14:53:23 -- common/autotest_common.sh@134 -- # : true 00:08:04.337 14:53:23 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:04.337 14:53:23 -- common/autotest_common.sh@136 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:04.337 14:53:23 -- common/autotest_common.sh@138 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:04.337 14:53:23 -- common/autotest_common.sh@140 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:04.337 14:53:23 -- common/autotest_common.sh@142 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:04.337 14:53:23 -- common/autotest_common.sh@144 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:04.337 14:53:23 -- common/autotest_common.sh@146 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:04.337 14:53:23 -- common/autotest_common.sh@148 -- # : e810 00:08:04.337 14:53:23 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:04.337 14:53:23 -- common/autotest_common.sh@150 -- # : 0 00:08:04.337 14:53:23 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:04.337 14:53:23 -- common/autotest_common.sh@152 -- # : 0 00:08:04.338 14:53:23 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:08:04.338 14:53:23 -- common/autotest_common.sh@154 -- # : 0 00:08:04.338 14:53:23 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:04.338 14:53:23 -- common/autotest_common.sh@156 -- # : 0 00:08:04.338 14:53:23 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:04.338 14:53:23 -- common/autotest_common.sh@158 -- # : 0 00:08:04.338 14:53:23 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:04.338 14:53:23 -- common/autotest_common.sh@160 -- # : 0 00:08:04.338 14:53:23 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:04.338 14:53:23 -- common/autotest_common.sh@163 -- # : 00:08:04.338 14:53:23 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:04.338 14:53:23 -- common/autotest_common.sh@165 -- # : 0 00:08:04.338 14:53:23 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:04.338 14:53:23 -- common/autotest_common.sh@167 -- # : 0 00:08:04.338 14:53:23 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:04.338 14:53:23 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:04.338 14:53:23 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:04.338 14:53:23 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:04.338 14:53:23 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:04.338 14:53:23 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:04.338 14:53:23 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:04.338 14:53:23 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:04.338 14:53:23 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:04.338 14:53:23 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:04.338 14:53:23 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:04.338 14:53:23 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:04.338 14:53:23 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:04.338 14:53:23 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:04.338 14:53:23 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:04.338 14:53:23 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:04.338 14:53:23 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:04.338 14:53:23 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:04.338 14:53:23 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:04.338 14:53:23 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:04.338 14:53:23 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:04.338 14:53:23 -- common/autotest_common.sh@196 -- # cat 00:08:04.338 14:53:23 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:04.338 14:53:23 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:04.338 14:53:23 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:04.338 14:53:23 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:04.338 14:53:23 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:04.338 14:53:23 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:04.338 14:53:23 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:04.338 14:53:23 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:04.338 14:53:23 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:04.338 14:53:23 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:04.338 14:53:23 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:04.338 14:53:23 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:04.338 14:53:23 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:04.338 14:53:23 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:04.338 14:53:23 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:04.338 14:53:23 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:04.338 14:53:23 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:04.338 14:53:23 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:04.338 14:53:23 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:04.338 14:53:23 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:04.338 14:53:23 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:04.338 14:53:23 -- common/autotest_common.sh@249 -- # valgrind= 00:08:04.338 14:53:23 -- common/autotest_common.sh@255 -- # uname -s 00:08:04.338 14:53:23 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:04.338 14:53:23 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:04.338 14:53:23 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:04.338 14:53:23 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:04.338 14:53:23 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:04.338 14:53:23 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:04.338 14:53:23 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:04.338 14:53:23 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j112 00:08:04.338 14:53:23 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:04.338 14:53:23 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:04.338 14:53:23 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:04.338 14:53:23 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:04.338 14:53:23 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:04.338 14:53:23 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:04.338 14:53:23 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:04.338 14:53:23 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:08:04.338 14:53:23 -- common/autotest_common.sh@309 -- # [[ -z 3109139 ]] 00:08:04.338 14:53:23 -- common/autotest_common.sh@309 -- # 
kill -0 3109139 00:08:04.338 14:53:23 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:04.338 14:53:23 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:04.338 14:53:23 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:04.338 14:53:23 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:04.338 14:53:23 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:04.338 14:53:23 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:04.338 14:53:23 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:04.338 14:53:23 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:04.338 14:53:23 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.ccsLkV 00:08:04.338 14:53:23 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:04.338 14:53:23 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:04.338 14:53:23 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:04.338 14:53:23 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ccsLkV/tests/target /tmp/spdk.ccsLkV 00:08:04.338 14:53:23 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:04.338 14:53:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.338 14:53:23 -- common/autotest_common.sh@318 -- # df -T 00:08:04.338 14:53:23 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:04.338 14:53:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:04.338 14:53:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:04.338 14:53:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:04.338 14:53:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:04.338 14:53:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:04.338 14:53:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.338 14:53:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:04.338 14:53:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:04.338 14:53:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=956715008 00:08:04.338 14:53:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:04.338 14:53:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=4327714816 00:08:04.338 14:53:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.338 14:53:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:04.338 14:53:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:04.338 14:53:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=83688099840 00:08:04.338 14:53:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94501429248 00:08:04.339 14:53:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=10813329408 00:08:04.339 14:53:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.339 14:53:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:04.339 14:53:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:04.339 14:53:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=47197196288 00:08:04.339 14:53:23 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=47250714624 00:08:04.598 14:53:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:04.598 14:53:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.598 14:53:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:04.598 14:53:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:04.598 14:53:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=18890653696 00:08:04.598 14:53:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18900287488 00:08:04.598 14:53:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=9633792 00:08:04.598 14:53:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.598 14:53:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:04.598 14:53:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:04.598 14:53:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=47249244160 00:08:04.598 14:53:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47250714624 00:08:04.598 14:53:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=1470464 00:08:04.598 14:53:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.598 14:53:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:04.598 14:53:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:04.598 14:53:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450135552 00:08:04.598 14:53:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450139648 00:08:04.598 14:53:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:04.598 14:53:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.598 14:53:23 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:04.598 * Looking for test storage... 
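set_test_storage, whose trace brackets this point, checks whether the repository filesystem can absorb about 2 GiB of scratch space before falling back to /tmp/spdk.ccsLkV. With the numbers printed in this run, the arithmetic in the lines that follow works out as (a worked reading, not part of the log):

    # requested_size = 2214592512                    (about 2 GiB asked for)
    # target_space   = 83688099840                   ('avail' of the overlay mount holding the repo)
    # new_size       = used + requested_size
    #                = 10813329408 + 2214592512 = 13027921920
    # occupancy      = new_size * 100 / 94501429248 ~= 13%, well under the 95% cutoff,
    # so SPDK_TEST_STORAGE stays at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target rather than the fallback.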
00:08:04.598 14:53:23 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:04.598 14:53:23 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:04.598 14:53:23 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.598 14:53:23 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:04.598 14:53:23 -- common/autotest_common.sh@363 -- # mount=/ 00:08:04.598 14:53:23 -- common/autotest_common.sh@365 -- # target_space=83688099840 00:08:04.598 14:53:23 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:04.598 14:53:23 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:04.598 14:53:23 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:04.598 14:53:23 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:04.598 14:53:23 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:04.598 14:53:23 -- common/autotest_common.sh@372 -- # new_size=13027921920 00:08:04.598 14:53:23 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:04.598 14:53:23 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.598 14:53:23 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.598 14:53:23 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.598 14:53:23 -- common/autotest_common.sh@380 -- # return 0 00:08:04.598 14:53:23 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:04.598 14:53:23 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:04.598 14:53:23 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:04.598 14:53:23 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:04.598 14:53:23 -- common/autotest_common.sh@1672 -- # true 00:08:04.598 14:53:23 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:04.598 14:53:23 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:04.598 14:53:23 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:04.598 14:53:23 -- common/autotest_common.sh@27 -- # exec 00:08:04.598 14:53:23 -- common/autotest_common.sh@29 -- # exec 00:08:04.598 14:53:23 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:04.598 14:53:23 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:04.598 14:53:23 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:04.598 14:53:23 -- common/autotest_common.sh@18 -- # set -x 00:08:04.598 14:53:23 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.598 14:53:23 -- nvmf/common.sh@7 -- # uname -s 00:08:04.598 14:53:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.598 14:53:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.598 14:53:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.598 14:53:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.598 14:53:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.598 14:53:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.598 14:53:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.598 14:53:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.598 14:53:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.598 14:53:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.598 14:53:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:04.598 14:53:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:04.598 14:53:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.598 14:53:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.598 14:53:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.598 14:53:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.598 14:53:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.598 14:53:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.598 14:53:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.598 14:53:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.598 14:53:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.598 14:53:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.598 14:53:23 -- paths/export.sh@5 -- # export PATH 00:08:04.598 14:53:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.598 14:53:23 -- nvmf/common.sh@46 -- # : 0 00:08:04.598 14:53:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:04.598 14:53:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:04.598 14:53:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:04.598 14:53:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.598 14:53:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.598 14:53:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:04.598 14:53:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:04.598 14:53:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:04.598 14:53:23 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:04.598 14:53:23 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:04.598 14:53:23 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:04.598 14:53:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:04.599 14:53:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.599 14:53:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:04.599 14:53:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:04.599 14:53:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:04.599 14:53:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.599 14:53:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.599 14:53:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.599 14:53:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:04.599 14:53:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:04.599 14:53:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:04.599 14:53:23 -- common/autotest_common.sh@10 -- # set +x 00:08:11.163 14:53:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:11.163 14:53:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:11.163 14:53:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:11.163 14:53:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:11.163 14:53:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:11.163 14:53:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:11.163 14:53:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:11.163 14:53:29 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:11.163 14:53:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:11.163 14:53:29 -- nvmf/common.sh@295 -- # e810=() 00:08:11.163 14:53:29 -- nvmf/common.sh@295 -- # local -ga e810 00:08:11.163 14:53:29 -- nvmf/common.sh@296 -- # x722=() 00:08:11.163 14:53:29 -- nvmf/common.sh@296 -- # local -ga x722 00:08:11.163 14:53:29 -- nvmf/common.sh@297 -- # mlx=() 00:08:11.163 14:53:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:11.163 14:53:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.163 14:53:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:11.163 14:53:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:11.163 14:53:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:11.163 14:53:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:11.163 14:53:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:11.163 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:11.163 14:53:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:11.163 14:53:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:11.163 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:11.163 14:53:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:11.163 14:53:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:11.163 14:53:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.163 14:53:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:11.163 14:53:29 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.163 14:53:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:11.163 Found net devices under 0000:af:00.0: cvl_0_0 00:08:11.163 14:53:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.163 14:53:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:11.163 14:53:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.163 14:53:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:11.163 14:53:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.163 14:53:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:11.163 Found net devices under 0000:af:00.1: cvl_0_1 00:08:11.163 14:53:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.163 14:53:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:11.163 14:53:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:11.163 14:53:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:11.163 14:53:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:11.163 14:53:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.163 14:53:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.163 14:53:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.163 14:53:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:11.163 14:53:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.163 14:53:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.163 14:53:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:11.163 14:53:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.163 14:53:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.163 14:53:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:11.163 14:53:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:11.163 14:53:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.164 14:53:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.164 14:53:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.164 14:53:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.164 14:53:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:11.164 14:53:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.164 14:53:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.164 14:53:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.164 14:53:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:11.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:08:11.164 00:08:11.164 --- 10.0.0.2 ping statistics --- 00:08:11.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.164 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:08:11.164 14:53:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:11.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:08:11.164 00:08:11.164 --- 10.0.0.1 ping statistics --- 00:08:11.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.164 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:11.164 14:53:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.164 14:53:29 -- nvmf/common.sh@410 -- # return 0 00:08:11.164 14:53:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:11.164 14:53:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.164 14:53:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:11.164 14:53:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:11.164 14:53:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.164 14:53:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:11.164 14:53:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:11.164 14:53:29 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:11.164 14:53:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:11.164 14:53:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.164 14:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:11.164 ************************************ 00:08:11.164 START TEST nvmf_filesystem_no_in_capsule 00:08:11.164 ************************************ 00:08:11.164 14:53:29 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:11.164 14:53:29 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:11.164 14:53:29 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:11.164 14:53:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:11.164 14:53:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:11.164 14:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:11.164 14:53:29 -- nvmf/common.sh@469 -- # nvmfpid=3112746 00:08:11.164 14:53:29 -- nvmf/common.sh@470 -- # waitforlisten 3112746 00:08:11.164 14:53:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.164 14:53:29 -- common/autotest_common.sh@819 -- # '[' -z 3112746 ']' 00:08:11.164 14:53:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.164 14:53:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:11.164 14:53:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.164 14:53:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:11.164 14:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:11.164 [2024-06-11 14:53:29.980486] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:11.164 [2024-06-11 14:53:29.980541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.422 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.422 [2024-06-11 14:53:30.079259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.422 [2024-06-11 14:53:30.166363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.422 [2024-06-11 14:53:30.166506] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.422 [2024-06-11 14:53:30.166517] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.422 [2024-06-11 14:53:30.166527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.422 [2024-06-11 14:53:30.166578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.422 [2024-06-11 14:53:30.166678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.422 [2024-06-11 14:53:30.166766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.422 [2024-06-11 14:53:30.166766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.358 14:53:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:12.358 14:53:30 -- common/autotest_common.sh@852 -- # return 0 00:08:12.358 14:53:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:12.358 14:53:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:12.358 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:08:12.358 14:53:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.358 14:53:30 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:12.358 14:53:30 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:12.358 14:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.358 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:08:12.358 [2024-06-11 14:53:30.877456] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.358 14:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.358 14:53:30 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:12.358 14:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.358 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:08:12.358 Malloc1 00:08:12.358 14:53:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.358 14:53:31 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:12.358 14:53:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.358 14:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:12.358 14:53:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.358 14:53:31 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.358 14:53:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.358 14:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:12.358 14:53:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.358 14:53:31 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
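For readability, the target bring-up traced above condenses to the sequence below. This is a minimal sketch, not the suite's exact code path: it assumes SPDK's scripts/rpc.py in place of the rpc_cmd wrapper and a relative build path, while the namespace, interface, address and NQN names are taken verbatim from the trace.

# Put one port in a private namespace and address both ends (nvmf_tcp_init above)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace, then configure it over the RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The host side then connects from the default namespace (plus the --hostnqn/--hostid
# arguments shown further down in the trace) and waits for the namespace to show up in lsblk
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420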
00:08:12.358 14:53:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.358 14:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:12.358 [2024-06-11 14:53:31.031472] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.358 14:53:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.358 14:53:31 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:12.358 14:53:31 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:12.358 14:53:31 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:12.358 14:53:31 -- common/autotest_common.sh@1359 -- # local bs 00:08:12.358 14:53:31 -- common/autotest_common.sh@1360 -- # local nb 00:08:12.358 14:53:31 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:12.358 14:53:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.358 14:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:12.358 14:53:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.358 14:53:31 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:12.358 { 00:08:12.358 "name": "Malloc1", 00:08:12.358 "aliases": [ 00:08:12.358 "7820e932-9a9a-4ad3-9f99-c8224bd6b26b" 00:08:12.358 ], 00:08:12.358 "product_name": "Malloc disk", 00:08:12.358 "block_size": 512, 00:08:12.358 "num_blocks": 1048576, 00:08:12.358 "uuid": "7820e932-9a9a-4ad3-9f99-c8224bd6b26b", 00:08:12.358 "assigned_rate_limits": { 00:08:12.358 "rw_ios_per_sec": 0, 00:08:12.358 "rw_mbytes_per_sec": 0, 00:08:12.358 "r_mbytes_per_sec": 0, 00:08:12.358 "w_mbytes_per_sec": 0 00:08:12.358 }, 00:08:12.358 "claimed": true, 00:08:12.358 "claim_type": "exclusive_write", 00:08:12.358 "zoned": false, 00:08:12.358 "supported_io_types": { 00:08:12.358 "read": true, 00:08:12.358 "write": true, 00:08:12.358 "unmap": true, 00:08:12.358 "write_zeroes": true, 00:08:12.358 "flush": true, 00:08:12.358 "reset": true, 00:08:12.358 "compare": false, 00:08:12.358 "compare_and_write": false, 00:08:12.358 "abort": true, 00:08:12.358 "nvme_admin": false, 00:08:12.358 "nvme_io": false 00:08:12.358 }, 00:08:12.358 "memory_domains": [ 00:08:12.358 { 00:08:12.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.358 "dma_device_type": 2 00:08:12.358 } 00:08:12.358 ], 00:08:12.358 "driver_specific": {} 00:08:12.358 } 00:08:12.358 ]' 00:08:12.358 14:53:31 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:12.358 14:53:31 -- common/autotest_common.sh@1362 -- # bs=512 00:08:12.358 14:53:31 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:12.358 14:53:31 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:12.358 14:53:31 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:12.358 14:53:31 -- common/autotest_common.sh@1367 -- # echo 512 00:08:12.358 14:53:31 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:12.358 14:53:31 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.735 14:53:32 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.736 14:53:32 -- common/autotest_common.sh@1177 -- # local i=0 00:08:13.736 14:53:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.736 14:53:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:13.736 14:53:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:16.270 14:53:34 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:16.270 14:53:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:16.270 14:53:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:16.270 14:53:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:16.270 14:53:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:16.270 14:53:34 -- common/autotest_common.sh@1187 -- # return 0 00:08:16.270 14:53:34 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:16.270 14:53:34 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:16.270 14:53:34 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:16.270 14:53:34 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:16.270 14:53:34 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:16.270 14:53:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:16.270 14:53:34 -- setup/common.sh@80 -- # echo 536870912 00:08:16.270 14:53:34 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:16.270 14:53:34 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:16.270 14:53:34 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:16.270 14:53:34 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:16.270 14:53:34 -- target/filesystem.sh@69 -- # partprobe 00:08:16.837 14:53:35 -- target/filesystem.sh@70 -- # sleep 1 00:08:17.774 14:53:36 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:17.774 14:53:36 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:17.774 14:53:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:17.774 14:53:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.774 14:53:36 -- common/autotest_common.sh@10 -- # set +x 00:08:17.774 ************************************ 00:08:17.774 START TEST filesystem_ext4 00:08:17.774 ************************************ 00:08:17.774 14:53:36 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:17.774 14:53:36 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:17.774 14:53:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.774 14:53:36 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:17.774 14:53:36 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:17.774 14:53:36 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:17.774 14:53:36 -- common/autotest_common.sh@904 -- # local i=0 00:08:17.774 14:53:36 -- common/autotest_common.sh@905 -- # local force 00:08:17.774 14:53:36 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:17.774 14:53:36 -- common/autotest_common.sh@908 -- # force=-F 00:08:17.774 14:53:36 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:18.033 mke2fs 1.46.5 (30-Dec-2021) 00:08:18.033 Discarding device blocks: 0/522240 done 00:08:18.033 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:18.033 Filesystem UUID: 5bcba67d-1565-479d-b4fe-d36a059ac2be 00:08:18.033 Superblock backups stored on blocks: 00:08:18.033 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:18.033 00:08:18.033 Allocating group tables: 0/64 done 00:08:18.033 Writing inode tables: 0/64 done 00:08:19.412 Creating journal (8192 blocks): done 00:08:19.980 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:19.980 00:08:19.980 14:53:38 -- 
common/autotest_common.sh@921 -- # return 0 00:08:19.980 14:53:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.547 14:53:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.806 14:53:39 -- target/filesystem.sh@25 -- # sync 00:08:20.806 14:53:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.806 14:53:39 -- target/filesystem.sh@27 -- # sync 00:08:20.806 14:53:39 -- target/filesystem.sh@29 -- # i=0 00:08:20.806 14:53:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.806 14:53:39 -- target/filesystem.sh@37 -- # kill -0 3112746 00:08:20.806 14:53:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.806 14:53:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.806 14:53:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.806 14:53:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.806 00:08:20.806 real 0m2.878s 00:08:20.806 user 0m0.026s 00:08:20.806 sys 0m0.066s 00:08:20.806 14:53:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.806 14:53:39 -- common/autotest_common.sh@10 -- # set +x 00:08:20.806 ************************************ 00:08:20.806 END TEST filesystem_ext4 00:08:20.806 ************************************ 00:08:20.806 14:53:39 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:20.806 14:53:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:20.806 14:53:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.806 14:53:39 -- common/autotest_common.sh@10 -- # set +x 00:08:20.806 ************************************ 00:08:20.806 START TEST filesystem_btrfs 00:08:20.806 ************************************ 00:08:20.807 14:53:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:20.807 14:53:39 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:20.807 14:53:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.807 14:53:39 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:20.807 14:53:39 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:20.807 14:53:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:20.807 14:53:39 -- common/autotest_common.sh@904 -- # local i=0 00:08:20.807 14:53:39 -- common/autotest_common.sh@905 -- # local force 00:08:20.807 14:53:39 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:20.807 14:53:39 -- common/autotest_common.sh@910 -- # force=-f 00:08:20.807 14:53:39 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:21.065 btrfs-progs v6.6.2 00:08:21.065 See https://btrfs.readthedocs.io for more information. 00:08:21.065 00:08:21.065 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:21.065 NOTE: several default settings have changed in version 5.15, please make sure 00:08:21.065 this does not affect your deployments: 00:08:21.065 - DUP for metadata (-m dup) 00:08:21.065 - enabled no-holes (-O no-holes) 00:08:21.065 - enabled free-space-tree (-R free-space-tree) 00:08:21.065 00:08:21.065 Label: (null) 00:08:21.065 UUID: 469d3641-b1f1-4be5-a6d0-537d0c66f3df 00:08:21.065 Node size: 16384 00:08:21.065 Sector size: 4096 00:08:21.065 Filesystem size: 510.00MiB 00:08:21.065 Block group profiles: 00:08:21.065 Data: single 8.00MiB 00:08:21.065 Metadata: DUP 32.00MiB 00:08:21.065 System: DUP 8.00MiB 00:08:21.065 SSD detected: yes 00:08:21.065 Zoned device: no 00:08:21.065 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:21.065 Runtime features: free-space-tree 00:08:21.065 Checksum: crc32c 00:08:21.065 Number of devices: 1 00:08:21.065 Devices: 00:08:21.065 ID SIZE PATH 00:08:21.065 1 510.00MiB /dev/nvme0n1p1 00:08:21.065 00:08:21.065 14:53:39 -- common/autotest_common.sh@921 -- # return 0 00:08:21.065 14:53:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.325 14:53:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.325 14:53:40 -- target/filesystem.sh@25 -- # sync 00:08:21.325 14:53:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.325 14:53:40 -- target/filesystem.sh@27 -- # sync 00:08:21.325 14:53:40 -- target/filesystem.sh@29 -- # i=0 00:08:21.325 14:53:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.325 14:53:40 -- target/filesystem.sh@37 -- # kill -0 3112746 00:08:21.325 14:53:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.325 14:53:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.325 14:53:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.325 14:53:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.325 00:08:21.325 real 0m0.588s 00:08:21.325 user 0m0.028s 00:08:21.325 sys 0m0.124s 00:08:21.325 14:53:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.325 14:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:21.325 ************************************ 00:08:21.325 END TEST filesystem_btrfs 00:08:21.325 ************************************ 00:08:21.325 14:53:40 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:21.325 14:53:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:21.325 14:53:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.325 14:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:21.325 ************************************ 00:08:21.325 START TEST filesystem_xfs 00:08:21.325 ************************************ 00:08:21.325 14:53:40 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:21.325 14:53:40 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:21.325 14:53:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.325 14:53:40 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:21.325 14:53:40 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:21.325 14:53:40 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:21.325 14:53:40 -- common/autotest_common.sh@904 -- # local i=0 00:08:21.325 14:53:40 -- common/autotest_common.sh@905 -- # local force 00:08:21.325 14:53:40 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:21.325 14:53:40 -- common/autotest_common.sh@910 -- # force=-f 00:08:21.325 14:53:40 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:21.584 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:21.584 = sectsz=512 attr=2, projid32bit=1 00:08:21.584 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:21.584 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:21.584 data = bsize=4096 blocks=130560, imaxpct=25 00:08:21.584 = sunit=0 swidth=0 blks 00:08:21.584 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:21.584 log =internal log bsize=4096 blocks=16384, version=2 00:08:21.584 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:21.584 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:22.521 Discarding blocks...Done. 00:08:22.521 14:53:41 -- common/autotest_common.sh@921 -- # return 0 00:08:22.521 14:53:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:24.432 14:53:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:24.432 14:53:43 -- target/filesystem.sh@25 -- # sync 00:08:24.432 14:53:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:24.432 14:53:43 -- target/filesystem.sh@27 -- # sync 00:08:24.432 14:53:43 -- target/filesystem.sh@29 -- # i=0 00:08:24.432 14:53:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:24.692 14:53:43 -- target/filesystem.sh@37 -- # kill -0 3112746 00:08:24.692 14:53:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:24.692 14:53:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:24.692 14:53:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:24.692 14:53:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:24.692 00:08:24.692 real 0m3.156s 00:08:24.692 user 0m0.023s 00:08:24.692 sys 0m0.073s 00:08:24.692 14:53:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.692 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 ************************************ 00:08:24.692 END TEST filesystem_xfs 00:08:24.692 ************************************ 00:08:24.692 14:53:43 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:24.692 14:53:43 -- target/filesystem.sh@93 -- # sync 00:08:24.692 14:53:43 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:24.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.692 14:53:43 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:24.692 14:53:43 -- common/autotest_common.sh@1198 -- # local i=0 00:08:24.692 14:53:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:24.692 14:53:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.692 14:53:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:24.692 14:53:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.951 14:53:43 -- common/autotest_common.sh@1210 -- # return 0 00:08:24.951 14:53:43 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.951 14:53:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.951 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:08:24.951 14:53:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.951 14:53:43 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:24.951 14:53:43 -- target/filesystem.sh@101 -- # killprocess 3112746 00:08:24.951 14:53:43 -- common/autotest_common.sh@926 -- # '[' -z 3112746 ']' 00:08:24.951 14:53:43 -- common/autotest_common.sh@930 -- # kill -0 3112746 00:08:24.951 14:53:43 -- 
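Each of the filesystem_ext4, filesystem_btrfs and filesystem_xfs cases above exercises the exported namespace the same way. Distilled from the trace (mount point, partition name and pid as logged), the per-filesystem check is roughly:

# One-time setup before the three cases
mkdir -p /mnt/device
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1

# Repeated per filesystem type
mkfs.ext4 -F /dev/nvme0n1p1        # or mkfs.btrfs -f / mkfs.xfs -f for the other two cases
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                 # the target (pid 3112746 in this pass) must still be running
lsblk -l -o NAME | grep -q -w nvme0n1   # device and partition still visible after the I/O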
common/autotest_common.sh@931 -- # uname 00:08:24.951 14:53:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:24.951 14:53:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3112746 00:08:24.951 14:53:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:24.951 14:53:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:24.951 14:53:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3112746' 00:08:24.951 killing process with pid 3112746 00:08:24.951 14:53:43 -- common/autotest_common.sh@945 -- # kill 3112746 00:08:24.951 14:53:43 -- common/autotest_common.sh@950 -- # wait 3112746 00:08:25.211 14:53:43 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:25.211 00:08:25.211 real 0m14.068s 00:08:25.211 user 0m55.007s 00:08:25.211 sys 0m1.283s 00:08:25.211 14:53:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.211 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:08:25.211 ************************************ 00:08:25.211 END TEST nvmf_filesystem_no_in_capsule 00:08:25.211 ************************************ 00:08:25.211 14:53:44 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:25.211 14:53:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:25.211 14:53:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.211 14:53:44 -- common/autotest_common.sh@10 -- # set +x 00:08:25.211 ************************************ 00:08:25.211 START TEST nvmf_filesystem_in_capsule 00:08:25.211 ************************************ 00:08:25.211 14:53:44 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:25.211 14:53:44 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:25.211 14:53:44 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:25.211 14:53:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:25.211 14:53:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:25.211 14:53:44 -- common/autotest_common.sh@10 -- # set +x 00:08:25.211 14:53:44 -- nvmf/common.sh@469 -- # nvmfpid=3115624 00:08:25.211 14:53:44 -- nvmf/common.sh@470 -- # waitforlisten 3115624 00:08:25.211 14:53:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.211 14:53:44 -- common/autotest_common.sh@819 -- # '[' -z 3115624 ']' 00:08:25.211 14:53:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.211 14:53:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:25.211 14:53:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.211 14:53:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:25.211 14:53:44 -- common/autotest_common.sh@10 -- # set +x 00:08:25.471 [2024-06-11 14:53:44.090344] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:25.471 [2024-06-11 14:53:44.090399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.471 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.471 [2024-06-11 14:53:44.184556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.471 [2024-06-11 14:53:44.273254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.471 [2024-06-11 14:53:44.273395] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.471 [2024-06-11 14:53:44.273406] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.471 [2024-06-11 14:53:44.273416] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.471 [2024-06-11 14:53:44.273457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.471 [2024-06-11 14:53:44.273559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.471 [2024-06-11 14:53:44.273660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.471 [2024-06-11 14:53:44.273660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.408 14:53:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:26.408 14:53:45 -- common/autotest_common.sh@852 -- # return 0 00:08:26.408 14:53:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:26.408 14:53:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:26.408 14:53:45 -- common/autotest_common.sh@10 -- # set +x 00:08:26.408 14:53:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.408 14:53:45 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:26.408 14:53:45 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:26.408 14:53:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.408 14:53:45 -- common/autotest_common.sh@10 -- # set +x 00:08:26.408 [2024-06-11 14:53:45.071780] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.408 14:53:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.408 14:53:45 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:26.408 14:53:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.408 14:53:45 -- common/autotest_common.sh@10 -- # set +x 00:08:26.408 Malloc1 00:08:26.408 14:53:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.408 14:53:45 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.408 14:53:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.408 14:53:45 -- common/autotest_common.sh@10 -- # set +x 00:08:26.408 14:53:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.408 14:53:45 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:26.408 14:53:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.408 14:53:45 -- common/autotest_common.sh@10 -- # set +x 00:08:26.408 14:53:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.408 14:53:45 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
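The in-capsule pass that starts here differs from the first pass mainly in the transport options; per the trace, the transport is created with a 4096-byte in-capsule data size instead of 0 (sketched with the same rpc.py assumption as above):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 0 in the no_in_capsule pass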
00:08:26.408 14:53:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.408 14:53:45 -- common/autotest_common.sh@10 -- # set +x 00:08:26.408 [2024-06-11 14:53:45.230633] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.408 14:53:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.408 14:53:45 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:26.408 14:53:45 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:26.408 14:53:45 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:26.408 14:53:45 -- common/autotest_common.sh@1359 -- # local bs 00:08:26.408 14:53:45 -- common/autotest_common.sh@1360 -- # local nb 00:08:26.408 14:53:45 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:26.408 14:53:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.408 14:53:45 -- common/autotest_common.sh@10 -- # set +x 00:08:26.668 14:53:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.668 14:53:45 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:26.668 { 00:08:26.668 "name": "Malloc1", 00:08:26.668 "aliases": [ 00:08:26.668 "31605954-048d-4c42-bfe2-c42dceee3237" 00:08:26.668 ], 00:08:26.668 "product_name": "Malloc disk", 00:08:26.668 "block_size": 512, 00:08:26.668 "num_blocks": 1048576, 00:08:26.668 "uuid": "31605954-048d-4c42-bfe2-c42dceee3237", 00:08:26.668 "assigned_rate_limits": { 00:08:26.668 "rw_ios_per_sec": 0, 00:08:26.668 "rw_mbytes_per_sec": 0, 00:08:26.668 "r_mbytes_per_sec": 0, 00:08:26.668 "w_mbytes_per_sec": 0 00:08:26.668 }, 00:08:26.668 "claimed": true, 00:08:26.668 "claim_type": "exclusive_write", 00:08:26.668 "zoned": false, 00:08:26.668 "supported_io_types": { 00:08:26.668 "read": true, 00:08:26.668 "write": true, 00:08:26.668 "unmap": true, 00:08:26.668 "write_zeroes": true, 00:08:26.668 "flush": true, 00:08:26.668 "reset": true, 00:08:26.668 "compare": false, 00:08:26.668 "compare_and_write": false, 00:08:26.668 "abort": true, 00:08:26.668 "nvme_admin": false, 00:08:26.668 "nvme_io": false 00:08:26.668 }, 00:08:26.668 "memory_domains": [ 00:08:26.668 { 00:08:26.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.668 "dma_device_type": 2 00:08:26.668 } 00:08:26.668 ], 00:08:26.668 "driver_specific": {} 00:08:26.668 } 00:08:26.668 ]' 00:08:26.668 14:53:45 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:26.668 14:53:45 -- common/autotest_common.sh@1362 -- # bs=512 00:08:26.668 14:53:45 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:26.668 14:53:45 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:26.668 14:53:45 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:26.668 14:53:45 -- common/autotest_common.sh@1367 -- # echo 512 00:08:26.668 14:53:45 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:26.668 14:53:45 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.043 14:53:46 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.043 14:53:46 -- common/autotest_common.sh@1177 -- # local i=0 00:08:28.043 14:53:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.043 14:53:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:28.043 14:53:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:29.946 14:53:48 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:29.946 14:53:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:29.946 14:53:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.946 14:53:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:29.946 14:53:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.946 14:53:48 -- common/autotest_common.sh@1187 -- # return 0 00:08:29.946 14:53:48 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:29.946 14:53:48 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:29.946 14:53:48 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:29.946 14:53:48 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:29.946 14:53:48 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:29.946 14:53:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:29.946 14:53:48 -- setup/common.sh@80 -- # echo 536870912 00:08:29.946 14:53:48 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:29.946 14:53:48 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:29.946 14:53:48 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:29.946 14:53:48 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:30.204 14:53:48 -- target/filesystem.sh@69 -- # partprobe 00:08:31.140 14:53:49 -- target/filesystem.sh@70 -- # sleep 1 00:08:32.074 14:53:50 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:32.074 14:53:50 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:32.074 14:53:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:32.074 14:53:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.074 14:53:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.074 ************************************ 00:08:32.074 START TEST filesystem_in_capsule_ext4 00:08:32.074 ************************************ 00:08:32.074 14:53:50 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:32.074 14:53:50 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:32.074 14:53:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.074 14:53:50 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:32.074 14:53:50 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:32.074 14:53:50 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:32.074 14:53:50 -- common/autotest_common.sh@904 -- # local i=0 00:08:32.074 14:53:50 -- common/autotest_common.sh@905 -- # local force 00:08:32.074 14:53:50 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:32.074 14:53:50 -- common/autotest_common.sh@908 -- # force=-F 00:08:32.074 14:53:50 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:32.074 mke2fs 1.46.5 (30-Dec-2021) 00:08:32.074 Discarding device blocks: 0/522240 done 00:08:32.074 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:32.074 Filesystem UUID: d1cf8a1d-1a7c-4e1e-8ca0-6fa483b4eafa 00:08:32.074 Superblock backups stored on blocks: 00:08:32.074 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:32.074 00:08:32.074 Allocating group tables: 0/64 done 00:08:32.074 Writing inode tables: 0/64 done 00:08:33.009 Creating journal (8192 blocks): done 00:08:33.009 Writing superblocks and filesystem accounting information: 0/64 done 00:08:33.009 00:08:33.009 
14:53:51 -- common/autotest_common.sh@921 -- # return 0 00:08:33.009 14:53:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:33.009 14:53:51 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:33.009 14:53:51 -- target/filesystem.sh@25 -- # sync 00:08:33.009 14:53:51 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:33.009 14:53:51 -- target/filesystem.sh@27 -- # sync 00:08:33.009 14:53:51 -- target/filesystem.sh@29 -- # i=0 00:08:33.009 14:53:51 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:33.009 14:53:51 -- target/filesystem.sh@37 -- # kill -0 3115624 00:08:33.009 14:53:51 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:33.009 14:53:51 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:33.009 14:53:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:33.009 14:53:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:33.268 00:08:33.268 real 0m1.103s 00:08:33.268 user 0m0.030s 00:08:33.268 sys 0m0.059s 00:08:33.268 14:53:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.268 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:08:33.268 ************************************ 00:08:33.268 END TEST filesystem_in_capsule_ext4 00:08:33.268 ************************************ 00:08:33.268 14:53:51 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:33.268 14:53:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:33.268 14:53:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.268 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:08:33.268 ************************************ 00:08:33.268 START TEST filesystem_in_capsule_btrfs 00:08:33.268 ************************************ 00:08:33.268 14:53:51 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:33.268 14:53:51 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:33.268 14:53:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:33.268 14:53:51 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:33.268 14:53:51 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:33.268 14:53:51 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:33.268 14:53:51 -- common/autotest_common.sh@904 -- # local i=0 00:08:33.268 14:53:51 -- common/autotest_common.sh@905 -- # local force 00:08:33.268 14:53:51 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:33.268 14:53:51 -- common/autotest_common.sh@910 -- # force=-f 00:08:33.268 14:53:51 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:33.268 btrfs-progs v6.6.2 00:08:33.268 See https://btrfs.readthedocs.io for more information. 00:08:33.268 00:08:33.268 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:33.268 NOTE: several default settings have changed in version 5.15, please make sure 00:08:33.268 this does not affect your deployments: 00:08:33.268 - DUP for metadata (-m dup) 00:08:33.268 - enabled no-holes (-O no-holes) 00:08:33.268 - enabled free-space-tree (-R free-space-tree) 00:08:33.268 00:08:33.268 Label: (null) 00:08:33.268 UUID: ad94f13c-7df7-42b9-8fc6-a7bd0bdd3874 00:08:33.268 Node size: 16384 00:08:33.268 Sector size: 4096 00:08:33.268 Filesystem size: 510.00MiB 00:08:33.268 Block group profiles: 00:08:33.268 Data: single 8.00MiB 00:08:33.268 Metadata: DUP 32.00MiB 00:08:33.268 System: DUP 8.00MiB 00:08:33.268 SSD detected: yes 00:08:33.268 Zoned device: no 00:08:33.268 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:33.268 Runtime features: free-space-tree 00:08:33.268 Checksum: crc32c 00:08:33.268 Number of devices: 1 00:08:33.268 Devices: 00:08:33.268 ID SIZE PATH 00:08:33.268 1 510.00MiB /dev/nvme0n1p1 00:08:33.268 00:08:33.268 14:53:52 -- common/autotest_common.sh@921 -- # return 0 00:08:33.268 14:53:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:33.835 14:53:52 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:33.835 14:53:52 -- target/filesystem.sh@25 -- # sync 00:08:33.835 14:53:52 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:33.835 14:53:52 -- target/filesystem.sh@27 -- # sync 00:08:33.835 14:53:52 -- target/filesystem.sh@29 -- # i=0 00:08:33.835 14:53:52 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:33.835 14:53:52 -- target/filesystem.sh@37 -- # kill -0 3115624 00:08:33.835 14:53:52 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:33.835 14:53:52 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:33.835 14:53:52 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:33.835 14:53:52 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:33.835 00:08:33.835 real 0m0.578s 00:08:33.835 user 0m0.023s 00:08:33.835 sys 0m0.127s 00:08:33.835 14:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.835 14:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:33.835 ************************************ 00:08:33.835 END TEST filesystem_in_capsule_btrfs 00:08:33.835 ************************************ 00:08:33.835 14:53:52 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:33.835 14:53:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:33.835 14:53:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.835 14:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:33.835 ************************************ 00:08:33.835 START TEST filesystem_in_capsule_xfs 00:08:33.835 ************************************ 00:08:33.835 14:53:52 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:33.835 14:53:52 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:33.835 14:53:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:33.835 14:53:52 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:33.835 14:53:52 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:33.835 14:53:52 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:33.835 14:53:52 -- common/autotest_common.sh@904 -- # local i=0 00:08:33.835 14:53:52 -- common/autotest_common.sh@905 -- # local force 00:08:33.835 14:53:52 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:33.835 14:53:52 -- common/autotest_common.sh@910 -- # force=-f 
00:08:33.835 14:53:52 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:33.835 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:33.835 = sectsz=512 attr=2, projid32bit=1 00:08:33.835 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:33.835 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:33.835 data = bsize=4096 blocks=130560, imaxpct=25 00:08:33.835 = sunit=0 swidth=0 blks 00:08:33.835 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:33.835 log =internal log bsize=4096 blocks=16384, version=2 00:08:33.835 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:33.835 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:34.771 Discarding blocks...Done. 00:08:34.771 14:53:53 -- common/autotest_common.sh@921 -- # return 0 00:08:34.771 14:53:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:36.732 14:53:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:36.732 14:53:55 -- target/filesystem.sh@25 -- # sync 00:08:36.732 14:53:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:36.732 14:53:55 -- target/filesystem.sh@27 -- # sync 00:08:36.732 14:53:55 -- target/filesystem.sh@29 -- # i=0 00:08:36.732 14:53:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:36.732 14:53:55 -- target/filesystem.sh@37 -- # kill -0 3115624 00:08:36.732 14:53:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:36.732 14:53:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.732 14:53:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.732 14:53:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.732 00:08:36.732 real 0m2.731s 00:08:36.732 user 0m0.019s 00:08:36.732 sys 0m0.077s 00:08:36.732 14:53:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.732 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:08:36.732 ************************************ 00:08:36.732 END TEST filesystem_in_capsule_xfs 00:08:36.732 ************************************ 00:08:36.732 14:53:55 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:36.732 14:53:55 -- target/filesystem.sh@93 -- # sync 00:08:36.732 14:53:55 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:36.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.732 14:53:55 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:36.732 14:53:55 -- common/autotest_common.sh@1198 -- # local i=0 00:08:36.732 14:53:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:36.732 14:53:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.732 14:53:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:36.732 14:53:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.732 14:53:55 -- common/autotest_common.sh@1210 -- # return 0 00:08:36.732 14:53:55 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.732 14:53:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.732 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:08:36.732 14:53:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.732 14:53:55 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:36.732 14:53:55 -- target/filesystem.sh@101 -- # killprocess 3115624 00:08:36.732 14:53:55 -- common/autotest_common.sh@926 -- # '[' -z 3115624 ']' 00:08:36.732 14:53:55 -- common/autotest_common.sh@930 -- # kill -0 3115624 
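Teardown in both passes follows the same pattern visible in the trace; a condensed sketch, again assuming scripts/rpc.py and the NQN and interface names from the log (the body of _remove_spdk_ns is not shown in the trace, so the netns deletion below is an assumption):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"                 # killprocess in the trace
# nvmftestfini then unloads the kernel modules and cleans up the test networking
modprobe -r nvme-tcp
modprobe -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk                    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1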
00:08:36.732 14:53:55 -- common/autotest_common.sh@931 -- # uname 00:08:36.732 14:53:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:36.732 14:53:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3115624 00:08:36.732 14:53:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:36.732 14:53:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:36.732 14:53:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3115624' 00:08:36.732 killing process with pid 3115624 00:08:36.732 14:53:55 -- common/autotest_common.sh@945 -- # kill 3115624 00:08:36.732 14:53:55 -- common/autotest_common.sh@950 -- # wait 3115624 00:08:37.300 14:53:55 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:37.300 00:08:37.300 real 0m11.894s 00:08:37.300 user 0m46.450s 00:08:37.300 sys 0m1.261s 00:08:37.300 14:53:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.300 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:08:37.300 ************************************ 00:08:37.300 END TEST nvmf_filesystem_in_capsule 00:08:37.300 ************************************ 00:08:37.300 14:53:55 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:37.300 14:53:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:37.300 14:53:55 -- nvmf/common.sh@116 -- # sync 00:08:37.300 14:53:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:37.300 14:53:55 -- nvmf/common.sh@119 -- # set +e 00:08:37.300 14:53:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:37.300 14:53:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:37.300 rmmod nvme_tcp 00:08:37.300 rmmod nvme_fabrics 00:08:37.300 rmmod nvme_keyring 00:08:37.300 14:53:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:37.300 14:53:56 -- nvmf/common.sh@123 -- # set -e 00:08:37.300 14:53:56 -- nvmf/common.sh@124 -- # return 0 00:08:37.300 14:53:56 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:37.300 14:53:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:37.300 14:53:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:37.300 14:53:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:37.300 14:53:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.300 14:53:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:37.300 14:53:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.300 14:53:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.300 14:53:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.836 14:53:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:39.836 00:08:39.836 real 0m35.133s 00:08:39.836 user 1m43.453s 00:08:39.836 sys 0m7.727s 00:08:39.836 14:53:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.836 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.836 ************************************ 00:08:39.836 END TEST nvmf_filesystem 00:08:39.836 ************************************ 00:08:39.836 14:53:58 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:39.836 14:53:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:39.836 14:53:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.836 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.836 ************************************ 00:08:39.836 START TEST nvmf_discovery 00:08:39.836 ************************************ 00:08:39.836 
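Before discovery.sh runs, nvmftestfini (traced just above) returns the machine to a clean state: the kernel initiator stack is unloaded and the target-side namespace is removed. In shell terms it amounts to roughly the following; _remove_spdk_ns is the test helper whose body is not shown in this log and is assumed here to delete the cvl_0_0_ns_spdk namespace:

  modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns                # assumed: ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1       # drop the 10.0.0.1/24 address left on the initiator port

The discovery test that starts here then rebuilds the same topology from scratch on the same two e810 ports.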
14:53:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:39.836 * Looking for test storage... 00:08:39.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.836 14:53:58 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.836 14:53:58 -- nvmf/common.sh@7 -- # uname -s 00:08:39.836 14:53:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.836 14:53:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.836 14:53:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.836 14:53:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.836 14:53:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.836 14:53:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.836 14:53:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.836 14:53:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.836 14:53:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.836 14:53:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.836 14:53:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:39.836 14:53:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:39.836 14:53:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.836 14:53:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.836 14:53:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.837 14:53:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.837 14:53:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.837 14:53:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.837 14:53:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.837 14:53:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.837 14:53:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.837 14:53:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.837 14:53:58 -- paths/export.sh@5 -- # export PATH 00:08:39.837 14:53:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.837 14:53:58 -- nvmf/common.sh@46 -- # : 0 00:08:39.837 14:53:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:39.837 14:53:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:39.837 14:53:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:39.837 14:53:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.837 14:53:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.837 14:53:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:39.837 14:53:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:39.837 14:53:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:39.837 14:53:58 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:39.837 14:53:58 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:39.837 14:53:58 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:39.837 14:53:58 -- target/discovery.sh@15 -- # hash nvme 00:08:39.837 14:53:58 -- target/discovery.sh@20 -- # nvmftestinit 00:08:39.837 14:53:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:39.837 14:53:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.837 14:53:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:39.837 14:53:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:39.837 14:53:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:39.837 14:53:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.837 14:53:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.837 14:53:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.837 14:53:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:39.837 14:53:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:39.837 14:53:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:39.837 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:08:46.406 14:54:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:46.406 14:54:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:46.406 14:54:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:46.406 14:54:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:46.406 14:54:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:46.406 14:54:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:46.406 14:54:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:46.406 14:54:04 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:46.406 14:54:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:46.406 14:54:04 -- nvmf/common.sh@295 -- # e810=() 00:08:46.406 14:54:04 -- nvmf/common.sh@295 -- # local -ga e810 00:08:46.406 14:54:04 -- nvmf/common.sh@296 -- # x722=() 00:08:46.406 14:54:04 -- nvmf/common.sh@296 -- # local -ga x722 00:08:46.406 14:54:04 -- nvmf/common.sh@297 -- # mlx=() 00:08:46.406 14:54:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:46.406 14:54:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.406 14:54:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:46.406 14:54:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:46.406 14:54:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:46.406 14:54:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.406 14:54:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:46.406 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:46.406 14:54:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.406 14:54:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:46.406 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:46.406 14:54:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:46.406 14:54:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:46.406 14:54:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.406 14:54:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.406 14:54:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.406 14:54:04 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.406 14:54:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:46.406 Found net devices under 0000:af:00.0: cvl_0_0 00:08:46.406 14:54:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.406 14:54:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.406 14:54:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.406 14:54:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.407 14:54:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.407 14:54:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:46.407 Found net devices under 0000:af:00.1: cvl_0_1 00:08:46.407 14:54:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.407 14:54:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:46.407 14:54:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:46.407 14:54:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:46.407 14:54:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:46.407 14:54:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:46.407 14:54:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.407 14:54:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.407 14:54:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.407 14:54:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:46.407 14:54:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.407 14:54:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.407 14:54:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:46.407 14:54:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.407 14:54:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.407 14:54:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:46.407 14:54:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:46.407 14:54:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.407 14:54:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.407 14:54:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.407 14:54:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.407 14:54:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:46.407 14:54:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.407 14:54:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.407 14:54:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.407 14:54:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:46.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:08:46.407 00:08:46.407 --- 10.0.0.2 ping statistics --- 00:08:46.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.407 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:08:46.407 14:54:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:08:46.407 00:08:46.407 --- 10.0.0.1 ping statistics --- 00:08:46.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.407 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:08:46.407 14:54:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.407 14:54:04 -- nvmf/common.sh@410 -- # return 0 00:08:46.407 14:54:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.407 14:54:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.407 14:54:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:46.407 14:54:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:46.407 14:54:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.407 14:54:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:46.407 14:54:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:46.407 14:54:04 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:46.407 14:54:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.407 14:54:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.407 14:54:04 -- common/autotest_common.sh@10 -- # set +x 00:08:46.407 14:54:04 -- nvmf/common.sh@469 -- # nvmfpid=3122131 00:08:46.407 14:54:04 -- nvmf/common.sh@470 -- # waitforlisten 3122131 00:08:46.407 14:54:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.407 14:54:04 -- common/autotest_common.sh@819 -- # '[' -z 3122131 ']' 00:08:46.407 14:54:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.407 14:54:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.407 14:54:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.407 14:54:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.407 14:54:04 -- common/autotest_common.sh@10 -- # set +x 00:08:46.407 [2024-06-11 14:54:04.872760] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:46.407 [2024-06-11 14:54:04.872818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.407 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.407 [2024-06-11 14:54:04.966090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.407 [2024-06-11 14:54:05.054014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.407 [2024-06-11 14:54:05.054165] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.407 [2024-06-11 14:54:05.054176] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.407 [2024-06-11 14:54:05.054186] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
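nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers; the reactor messages that follow mark the app as ready, after which the seq 1 4 loop builds four single-namespace subsystems. A condensed sketch, with paths abbreviated relative to the spdk checkout, assuming rpc.py is the scripts/rpc.py wrapper behind rpc_cmd and using a spdk_get_version poll as a stand-in for the real waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done   # wait for /var/tmp/spdk.sock

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_null_create Null$i 102400 512                         # size/block size from NULL_BDEV_SIZE / NULL_BLOCK_SIZE
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

With that in place the discovery log served on port 4420 should contain six records: the current discovery subsystem, the four NVMe subsystems, and the 4430 referral, which is exactly what the nvme discover output further down shows.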
00:08:46.407 [2024-06-11 14:54:05.054238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.407 [2024-06-11 14:54:05.054345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.407 [2024-06-11 14:54:05.054450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.407 [2024-06-11 14:54:05.054450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.975 14:54:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.975 14:54:05 -- common/autotest_common.sh@852 -- # return 0 00:08:46.975 14:54:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.975 14:54:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:46.975 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.234 14:54:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.234 14:54:05 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.234 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.234 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.234 [2024-06-11 14:54:05.856792] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.234 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.234 14:54:05 -- target/discovery.sh@26 -- # seq 1 4 00:08:47.234 14:54:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:47.234 14:54:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:47.234 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.234 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.234 Null1 00:08:47.234 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.234 14:54:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.234 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.234 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.234 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.234 14:54:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:47.234 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.234 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.234 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.234 14:54:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.234 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.234 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.234 [2024-06-11 14:54:05.905114] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.234 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.234 14:54:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:47.234 14:54:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:47.234 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.234 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.234 Null2 00:08:47.234 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.234 14:54:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:47.234 14:54:05 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.234 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.234 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.234 14:54:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:47.234 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.234 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:47.235 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:47.235 14:54:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:47.235 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 Null3 00:08:47.235 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:47.235 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:47.235 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:47.235 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:47.235 14:54:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:47.235 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 Null4 00:08:47.235 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:47.235 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:47.235 14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:47.235 
14:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:06 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.235 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:06 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:47.235 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.235 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.235 14:54:06 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:47.494 00:08:47.494 Discovery Log Number of Records 6, Generation counter 6 00:08:47.494 =====Discovery Log Entry 0====== 00:08:47.494 trtype: tcp 00:08:47.494 adrfam: ipv4 00:08:47.494 subtype: current discovery subsystem 00:08:47.494 treq: not required 00:08:47.494 portid: 0 00:08:47.494 trsvcid: 4420 00:08:47.494 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:47.494 traddr: 10.0.0.2 00:08:47.494 eflags: explicit discovery connections, duplicate discovery information 00:08:47.494 sectype: none 00:08:47.494 =====Discovery Log Entry 1====== 00:08:47.494 trtype: tcp 00:08:47.494 adrfam: ipv4 00:08:47.494 subtype: nvme subsystem 00:08:47.494 treq: not required 00:08:47.494 portid: 0 00:08:47.494 trsvcid: 4420 00:08:47.494 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:47.494 traddr: 10.0.0.2 00:08:47.494 eflags: none 00:08:47.494 sectype: none 00:08:47.494 =====Discovery Log Entry 2====== 00:08:47.494 trtype: tcp 00:08:47.494 adrfam: ipv4 00:08:47.494 subtype: nvme subsystem 00:08:47.494 treq: not required 00:08:47.494 portid: 0 00:08:47.494 trsvcid: 4420 00:08:47.494 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:47.494 traddr: 10.0.0.2 00:08:47.494 eflags: none 00:08:47.494 sectype: none 00:08:47.494 =====Discovery Log Entry 3====== 00:08:47.494 trtype: tcp 00:08:47.494 adrfam: ipv4 00:08:47.494 subtype: nvme subsystem 00:08:47.494 treq: not required 00:08:47.494 portid: 0 00:08:47.494 trsvcid: 4420 00:08:47.494 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:47.494 traddr: 10.0.0.2 00:08:47.494 eflags: none 00:08:47.494 sectype: none 00:08:47.494 =====Discovery Log Entry 4====== 00:08:47.494 trtype: tcp 00:08:47.494 adrfam: ipv4 00:08:47.494 subtype: nvme subsystem 00:08:47.494 treq: not required 00:08:47.494 portid: 0 00:08:47.494 trsvcid: 4420 00:08:47.494 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:47.494 traddr: 10.0.0.2 00:08:47.494 eflags: none 00:08:47.494 sectype: none 00:08:47.494 =====Discovery Log Entry 5====== 00:08:47.494 trtype: tcp 00:08:47.494 adrfam: ipv4 00:08:47.494 subtype: discovery subsystem referral 00:08:47.494 treq: not required 00:08:47.494 portid: 0 00:08:47.494 trsvcid: 4430 00:08:47.494 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:47.494 traddr: 10.0.0.2 00:08:47.494 eflags: none 00:08:47.494 sectype: none 00:08:47.494 14:54:06 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:47.494 Perform nvmf subsystem discovery via RPC 00:08:47.494 14:54:06 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 [2024-06-11 14:54:06.234207] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:47.495 [ 00:08:47.495 { 00:08:47.495 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:47.495 "subtype": "Discovery", 00:08:47.495 "listen_addresses": [ 00:08:47.495 { 00:08:47.495 "transport": "TCP", 00:08:47.495 "trtype": "TCP", 00:08:47.495 "adrfam": "IPv4", 00:08:47.495 "traddr": "10.0.0.2", 00:08:47.495 "trsvcid": "4420" 00:08:47.495 } 00:08:47.495 ], 00:08:47.495 "allow_any_host": true, 00:08:47.495 "hosts": [] 00:08:47.495 }, 00:08:47.495 { 00:08:47.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.495 "subtype": "NVMe", 00:08:47.495 "listen_addresses": [ 00:08:47.495 { 00:08:47.495 "transport": "TCP", 00:08:47.495 "trtype": "TCP", 00:08:47.495 "adrfam": "IPv4", 00:08:47.495 "traddr": "10.0.0.2", 00:08:47.495 "trsvcid": "4420" 00:08:47.495 } 00:08:47.495 ], 00:08:47.495 "allow_any_host": true, 00:08:47.495 "hosts": [], 00:08:47.495 "serial_number": "SPDK00000000000001", 00:08:47.495 "model_number": "SPDK bdev Controller", 00:08:47.495 "max_namespaces": 32, 00:08:47.495 "min_cntlid": 1, 00:08:47.495 "max_cntlid": 65519, 00:08:47.495 "namespaces": [ 00:08:47.495 { 00:08:47.495 "nsid": 1, 00:08:47.495 "bdev_name": "Null1", 00:08:47.495 "name": "Null1", 00:08:47.495 "nguid": "84434D8C7D3E450ABD1435E18D63C6FA", 00:08:47.495 "uuid": "84434d8c-7d3e-450a-bd14-35e18d63c6fa" 00:08:47.495 } 00:08:47.495 ] 00:08:47.495 }, 00:08:47.495 { 00:08:47.495 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:47.495 "subtype": "NVMe", 00:08:47.495 "listen_addresses": [ 00:08:47.495 { 00:08:47.495 "transport": "TCP", 00:08:47.495 "trtype": "TCP", 00:08:47.495 "adrfam": "IPv4", 00:08:47.495 "traddr": "10.0.0.2", 00:08:47.495 "trsvcid": "4420" 00:08:47.495 } 00:08:47.495 ], 00:08:47.495 "allow_any_host": true, 00:08:47.495 "hosts": [], 00:08:47.495 "serial_number": "SPDK00000000000002", 00:08:47.495 "model_number": "SPDK bdev Controller", 00:08:47.495 "max_namespaces": 32, 00:08:47.495 "min_cntlid": 1, 00:08:47.495 "max_cntlid": 65519, 00:08:47.495 "namespaces": [ 00:08:47.495 { 00:08:47.495 "nsid": 1, 00:08:47.495 "bdev_name": "Null2", 00:08:47.495 "name": "Null2", 00:08:47.495 "nguid": "6CE8E224439E42079C61B83CE846CDB9", 00:08:47.495 "uuid": "6ce8e224-439e-4207-9c61-b83ce846cdb9" 00:08:47.495 } 00:08:47.495 ] 00:08:47.495 }, 00:08:47.495 { 00:08:47.495 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:47.495 "subtype": "NVMe", 00:08:47.495 "listen_addresses": [ 00:08:47.495 { 00:08:47.495 "transport": "TCP", 00:08:47.495 "trtype": "TCP", 00:08:47.495 "adrfam": "IPv4", 00:08:47.495 "traddr": "10.0.0.2", 00:08:47.495 "trsvcid": "4420" 00:08:47.495 } 00:08:47.495 ], 00:08:47.495 "allow_any_host": true, 00:08:47.495 "hosts": [], 00:08:47.495 "serial_number": "SPDK00000000000003", 00:08:47.495 "model_number": "SPDK bdev Controller", 00:08:47.495 "max_namespaces": 32, 00:08:47.495 "min_cntlid": 1, 00:08:47.495 "max_cntlid": 65519, 00:08:47.495 "namespaces": [ 00:08:47.495 { 00:08:47.495 "nsid": 1, 00:08:47.495 "bdev_name": "Null3", 00:08:47.495 "name": "Null3", 00:08:47.495 "nguid": "81F1A42BC01A46208D98E91932854CEB", 00:08:47.495 "uuid": "81f1a42b-c01a-4620-8d98-e91932854ceb" 00:08:47.495 } 00:08:47.495 ] 
00:08:47.495 }, 00:08:47.495 { 00:08:47.495 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:47.495 "subtype": "NVMe", 00:08:47.495 "listen_addresses": [ 00:08:47.495 { 00:08:47.495 "transport": "TCP", 00:08:47.495 "trtype": "TCP", 00:08:47.495 "adrfam": "IPv4", 00:08:47.495 "traddr": "10.0.0.2", 00:08:47.495 "trsvcid": "4420" 00:08:47.495 } 00:08:47.495 ], 00:08:47.495 "allow_any_host": true, 00:08:47.495 "hosts": [], 00:08:47.495 "serial_number": "SPDK00000000000004", 00:08:47.495 "model_number": "SPDK bdev Controller", 00:08:47.495 "max_namespaces": 32, 00:08:47.495 "min_cntlid": 1, 00:08:47.495 "max_cntlid": 65519, 00:08:47.495 "namespaces": [ 00:08:47.495 { 00:08:47.495 "nsid": 1, 00:08:47.495 "bdev_name": "Null4", 00:08:47.495 "name": "Null4", 00:08:47.495 "nguid": "43BE76C95F2241A7888595DADA527489", 00:08:47.495 "uuid": "43be76c9-5f22-41a7-8885-95dada527489" 00:08:47.495 } 00:08:47.495 ] 00:08:47.495 } 00:08:47.495 ] 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.495 14:54:06 -- target/discovery.sh@42 -- # seq 1 4 00:08:47.495 14:54:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:47.495 14:54:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.495 14:54:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.495 14:54:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:47.495 14:54:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.495 14:54:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.495 14:54:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:47.495 14:54:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.495 14:54:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.495 14:54:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:47.495 14:54:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
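The deletions being traced here mirror the setup in reverse: each subsystem is torn down before its backing null bdev, the 4430 referral is dropped, and bdev_get_bdevs is queried to confirm nothing is left behind. Roughly, under the same rpc.py assumption as above:

  for i in 1 2 3 4; do
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    ./scripts/rpc.py bdev_null_delete Null$i
  done
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'    # expected to print nothing at this point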
00:08:47.495 14:54:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.495 14:54:06 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.495 14:54:06 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:47.495 14:54:06 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:47.495 14:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.495 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.755 14:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.755 14:54:06 -- target/discovery.sh@49 -- # check_bdevs= 00:08:47.755 14:54:06 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:47.755 14:54:06 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:47.755 14:54:06 -- target/discovery.sh@57 -- # nvmftestfini 00:08:47.755 14:54:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:47.755 14:54:06 -- nvmf/common.sh@116 -- # sync 00:08:47.755 14:54:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:47.755 14:54:06 -- nvmf/common.sh@119 -- # set +e 00:08:47.755 14:54:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:47.755 14:54:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:47.755 rmmod nvme_tcp 00:08:47.755 rmmod nvme_fabrics 00:08:47.755 rmmod nvme_keyring 00:08:47.755 14:54:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:47.755 14:54:06 -- nvmf/common.sh@123 -- # set -e 00:08:47.755 14:54:06 -- nvmf/common.sh@124 -- # return 0 00:08:47.755 14:54:06 -- nvmf/common.sh@477 -- # '[' -n 3122131 ']' 00:08:47.755 14:54:06 -- nvmf/common.sh@478 -- # killprocess 3122131 00:08:47.755 14:54:06 -- common/autotest_common.sh@926 -- # '[' -z 3122131 ']' 00:08:47.755 14:54:06 -- common/autotest_common.sh@930 -- # kill -0 3122131 00:08:47.755 14:54:06 -- common/autotest_common.sh@931 -- # uname 00:08:47.755 14:54:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:47.755 14:54:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3122131 00:08:47.755 14:54:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:47.755 14:54:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:47.755 14:54:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3122131' 00:08:47.755 killing process with pid 3122131 00:08:47.755 14:54:06 -- common/autotest_common.sh@945 -- # kill 3122131 00:08:47.755 [2024-06-11 14:54:06.482392] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:47.755 14:54:06 -- common/autotest_common.sh@950 -- # wait 3122131 00:08:48.015 14:54:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:48.015 14:54:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:48.015 14:54:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:48.015 14:54:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.015 14:54:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:48.015 14:54:06 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.015 14:54:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.015 14:54:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.552 14:54:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:50.552 00:08:50.552 real 0m10.631s 00:08:50.552 user 0m8.513s 00:08:50.552 sys 0m5.416s 00:08:50.552 14:54:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.552 14:54:08 -- common/autotest_common.sh@10 -- # set +x 00:08:50.552 ************************************ 00:08:50.552 END TEST nvmf_discovery 00:08:50.552 ************************************ 00:08:50.552 14:54:08 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:50.552 14:54:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:50.552 14:54:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.552 14:54:08 -- common/autotest_common.sh@10 -- # set +x 00:08:50.552 ************************************ 00:08:50.552 START TEST nvmf_referrals 00:08:50.552 ************************************ 00:08:50.552 14:54:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:50.552 * Looking for test storage... 00:08:50.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.553 14:54:08 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.553 14:54:08 -- nvmf/common.sh@7 -- # uname -s 00:08:50.553 14:54:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.553 14:54:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.553 14:54:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.553 14:54:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.553 14:54:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.553 14:54:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.553 14:54:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.553 14:54:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.553 14:54:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.553 14:54:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.553 14:54:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:50.553 14:54:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:50.553 14:54:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.553 14:54:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.553 14:54:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.553 14:54:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.553 14:54:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.553 14:54:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.553 14:54:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.553 14:54:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.553 14:54:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.553 14:54:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.553 14:54:08 -- paths/export.sh@5 -- # export PATH 00:08:50.553 14:54:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.553 14:54:08 -- nvmf/common.sh@46 -- # : 0 00:08:50.553 14:54:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.553 14:54:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.553 14:54:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.553 14:54:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.553 14:54:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.553 14:54:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.553 14:54:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.553 14:54:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.553 14:54:08 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:50.553 14:54:08 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:50.553 14:54:08 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:50.553 14:54:08 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:50.553 14:54:08 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:50.553 14:54:08 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:50.553 14:54:08 -- target/referrals.sh@37 -- # nvmftestinit 00:08:50.553 14:54:08 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:50.553 14:54:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.553 14:54:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:50.553 14:54:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:50.553 14:54:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:50.553 14:54:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.553 14:54:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.553 14:54:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.553 14:54:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:50.553 14:54:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:50.553 14:54:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:50.553 14:54:08 -- common/autotest_common.sh@10 -- # set +x 00:08:57.127 14:54:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:57.127 14:54:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:57.127 14:54:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:57.127 14:54:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:57.127 14:54:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:57.127 14:54:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:57.127 14:54:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:57.127 14:54:15 -- nvmf/common.sh@294 -- # net_devs=() 00:08:57.127 14:54:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:57.127 14:54:15 -- nvmf/common.sh@295 -- # e810=() 00:08:57.127 14:54:15 -- nvmf/common.sh@295 -- # local -ga e810 00:08:57.127 14:54:15 -- nvmf/common.sh@296 -- # x722=() 00:08:57.127 14:54:15 -- nvmf/common.sh@296 -- # local -ga x722 00:08:57.127 14:54:15 -- nvmf/common.sh@297 -- # mlx=() 00:08:57.127 14:54:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:57.127 14:54:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.127 14:54:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:57.127 14:54:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:57.127 14:54:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:57.127 14:54:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:57.127 14:54:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:57.127 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:57.127 14:54:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:57.127 14:54:15 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:57.127 14:54:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:57.127 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:57.127 14:54:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.127 14:54:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:57.128 14:54:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:57.128 14:54:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:57.128 14:54:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:57.128 14:54:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:57.128 14:54:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.128 14:54:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:57.128 14:54:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.128 14:54:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:57.128 Found net devices under 0000:af:00.0: cvl_0_0 00:08:57.128 14:54:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.128 14:54:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:57.128 14:54:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.128 14:54:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:57.128 14:54:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.128 14:54:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:57.128 Found net devices under 0000:af:00.1: cvl_0_1 00:08:57.128 14:54:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.128 14:54:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:57.128 14:54:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:57.128 14:54:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:57.128 14:54:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:57.128 14:54:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:57.128 14:54:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.128 14:54:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.128 14:54:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.128 14:54:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:57.128 14:54:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.128 14:54:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.128 14:54:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:57.128 14:54:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.128 14:54:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.128 14:54:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:57.128 14:54:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:57.128 14:54:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.128 14:54:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
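nvmf_tcp_init is rebuilding the same loopback topology the discovery test used: one e810 port (cvl_0_0) is pushed into a private namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; on this phy rig the two ports are presumably cabled back to back. The remaining plumbing, continued in the trace below, is:

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability check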
00:08:57.128 14:54:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.128 14:54:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.128 14:54:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:57.128 14:54:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.128 14:54:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.128 14:54:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.128 14:54:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:57.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:08:57.128 00:08:57.128 --- 10.0.0.2 ping statistics --- 00:08:57.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.128 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:08:57.128 14:54:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:08:57.128 00:08:57.128 --- 10.0.0.1 ping statistics --- 00:08:57.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.128 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:08:57.128 14:54:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.128 14:54:15 -- nvmf/common.sh@410 -- # return 0 00:08:57.128 14:54:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:57.128 14:54:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.128 14:54:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:57.128 14:54:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:57.128 14:54:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.128 14:54:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:57.128 14:54:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:57.128 14:54:15 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:57.128 14:54:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:57.128 14:54:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:57.128 14:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:57.128 14:54:15 -- nvmf/common.sh@469 -- # nvmfpid=3126855 00:08:57.128 14:54:15 -- nvmf/common.sh@470 -- # waitforlisten 3126855 00:08:57.128 14:54:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.128 14:54:15 -- common/autotest_common.sh@819 -- # '[' -z 3126855 ']' 00:08:57.128 14:54:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.128 14:54:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:57.128 14:54:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.128 14:54:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:57.128 14:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:57.128 [2024-06-11 14:54:15.462178] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
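Once this second target instance is up, referrals.sh runs a short add/verify/remove cycle: three referrals are registered, read back both through the RPC interface and through an nvme discover against the 8009 discovery listener, compared, and removed again. In outline (same rpc.py assumption as before; the --hostnqn/--hostid flags visible in the trace are omitted here for brevity):

  ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done

  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort    # target-side view

  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort    # initiator-side view

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
  done

Both listings are expected to come back as 127.0.0.2 127.0.0.3 127.0.0.4, which is what the comparisons further down check.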
00:08:57.128 [2024-06-11 14:54:15.462231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.128 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.128 [2024-06-11 14:54:15.557732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.128 [2024-06-11 14:54:15.646822] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.128 [2024-06-11 14:54:15.646964] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.128 [2024-06-11 14:54:15.646975] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.128 [2024-06-11 14:54:15.646984] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.128 [2024-06-11 14:54:15.647050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.128 [2024-06-11 14:54:15.647105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.128 [2024-06-11 14:54:15.647218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.128 [2024-06-11 14:54:15.647219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.696 14:54:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:57.696 14:54:16 -- common/autotest_common.sh@852 -- # return 0 00:08:57.696 14:54:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:57.696 14:54:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:57.696 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 14:54:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.696 14:54:16 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.696 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.696 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 [2024-06-11 14:54:16.442789] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.696 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.696 14:54:16 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:57.696 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.696 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 [2024-06-11 14:54:16.459012] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:57.696 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.696 14:54:16 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:57.696 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.696 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.696 14:54:16 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:57.696 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.696 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.696 14:54:16 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:57.696 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.696 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.696 14:54:16 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.696 14:54:16 -- target/referrals.sh@48 -- # jq length 00:08:57.696 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.696 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.955 14:54:16 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:57.955 14:54:16 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:57.955 14:54:16 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:57.955 14:54:16 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.955 14:54:16 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:57.955 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.955 14:54:16 -- target/referrals.sh@21 -- # sort 00:08:57.955 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.955 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.955 14:54:16 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:57.955 14:54:16 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:57.955 14:54:16 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:57.955 14:54:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.955 14:54:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.955 14:54:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.955 14:54:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.955 14:54:16 -- target/referrals.sh@26 -- # sort 00:08:58.215 14:54:16 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:58.215 14:54:16 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:58.215 14:54:16 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:58.215 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.215 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.215 14:54:16 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:58.215 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.215 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.215 14:54:16 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:58.215 14:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.215 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.215 14:54:16 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.215 14:54:16 -- target/referrals.sh@56 -- # jq length 00:08:58.215 14:54:16 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.215 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 14:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.215 14:54:16 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:58.215 14:54:16 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:58.215 14:54:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.215 14:54:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.215 14:54:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.215 14:54:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.215 14:54:16 -- target/referrals.sh@26 -- # sort 00:08:58.215 14:54:17 -- target/referrals.sh@26 -- # echo 00:08:58.215 14:54:17 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:58.215 14:54:17 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:58.215 14:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.215 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 14:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.215 14:54:17 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:58.215 14:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.215 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 14:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.215 14:54:17 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:58.215 14:54:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.215 14:54:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.215 14:54:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.215 14:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.215 14:54:17 -- target/referrals.sh@21 -- # sort 00:08:58.215 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.215 14:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.475 14:54:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:58.475 14:54:17 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:58.475 14:54:17 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:58.475 14:54:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.475 14:54:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.475 14:54:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.475 14:54:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.475 14:54:17 -- target/referrals.sh@26 -- # sort 00:08:58.475 14:54:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:58.475 14:54:17 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:58.475 14:54:17 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:58.475 14:54:17 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:58.475 14:54:17 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:58.475 14:54:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.475 14:54:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:58.734 14:54:17 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:58.734 14:54:17 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:58.734 14:54:17 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:58.734 14:54:17 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:58.734 14:54:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.734 14:54:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:58.734 14:54:17 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:58.734 14:54:17 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:58.734 14:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.734 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.734 14:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.734 14:54:17 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:58.734 14:54:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.734 14:54:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.734 14:54:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.734 14:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.734 14:54:17 -- target/referrals.sh@21 -- # sort 00:08:58.734 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.734 14:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.734 14:54:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:58.734 14:54:17 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:58.734 14:54:17 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:58.734 14:54:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.734 14:54:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.734 14:54:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.734 14:54:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.734 14:54:17 -- target/referrals.sh@26 -- # sort 00:08:58.993 14:54:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:58.993 14:54:17 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:58.993 14:54:17 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:58.993 14:54:17 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:58.993 14:54:17 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:58.993 14:54:17 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.993 14:54:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:58.993 14:54:17 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:58.993 14:54:17 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:58.993 14:54:17 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:58.993 14:54:17 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:58.993 14:54:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.993 14:54:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:59.252 14:54:17 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:59.252 14:54:17 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:59.252 14:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.252 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:59.252 14:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.252 14:54:17 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:59.252 14:54:17 -- target/referrals.sh@82 -- # jq length 00:08:59.252 14:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.252 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:59.252 14:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.252 14:54:17 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:59.252 14:54:17 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:59.252 14:54:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:59.252 14:54:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:59.252 14:54:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.252 14:54:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:59.252 14:54:17 -- target/referrals.sh@26 -- # sort 00:08:59.252 14:54:18 -- target/referrals.sh@26 -- # echo 00:08:59.252 14:54:18 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:59.252 14:54:18 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:59.252 14:54:18 -- target/referrals.sh@86 -- # nvmftestfini 00:08:59.252 14:54:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:59.252 14:54:18 -- nvmf/common.sh@116 -- # sync 00:08:59.252 14:54:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:59.252 14:54:18 -- nvmf/common.sh@119 -- # set +e 00:08:59.252 14:54:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:59.252 14:54:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:59.252 rmmod nvme_tcp 00:08:59.252 rmmod nvme_fabrics 00:08:59.252 rmmod nvme_keyring 00:08:59.511 14:54:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:59.511 14:54:18 -- nvmf/common.sh@123 -- # set -e 00:08:59.511 14:54:18 -- nvmf/common.sh@124 -- # return 0 00:08:59.511 14:54:18 -- nvmf/common.sh@477 
-- # '[' -n 3126855 ']' 00:08:59.511 14:54:18 -- nvmf/common.sh@478 -- # killprocess 3126855 00:08:59.511 14:54:18 -- common/autotest_common.sh@926 -- # '[' -z 3126855 ']' 00:08:59.511 14:54:18 -- common/autotest_common.sh@930 -- # kill -0 3126855 00:08:59.511 14:54:18 -- common/autotest_common.sh@931 -- # uname 00:08:59.511 14:54:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:59.511 14:54:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3126855 00:08:59.511 14:54:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:59.511 14:54:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:59.511 14:54:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3126855' 00:08:59.511 killing process with pid 3126855 00:08:59.511 14:54:18 -- common/autotest_common.sh@945 -- # kill 3126855 00:08:59.511 14:54:18 -- common/autotest_common.sh@950 -- # wait 3126855 00:08:59.769 14:54:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:59.769 14:54:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:59.769 14:54:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:59.769 14:54:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.769 14:54:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:59.769 14:54:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.769 14:54:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.769 14:54:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.677 14:54:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:01.677 00:09:01.677 real 0m11.628s 00:09:01.677 user 0m13.602s 00:09:01.677 sys 0m5.603s 00:09:01.677 14:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.677 14:54:20 -- common/autotest_common.sh@10 -- # set +x 00:09:01.677 ************************************ 00:09:01.677 END TEST nvmf_referrals 00:09:01.677 ************************************ 00:09:01.677 14:54:20 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:01.677 14:54:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:01.677 14:54:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:01.677 14:54:20 -- common/autotest_common.sh@10 -- # set +x 00:09:01.677 ************************************ 00:09:01.677 START TEST nvmf_connect_disconnect 00:09:01.677 ************************************ 00:09:01.677 14:54:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:01.936 * Looking for test storage... 
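
The nvmf_referrals run that finishes above exercises the discovery-referral RPCs end to end: the target gets a TCP transport and a discovery listener on 10.0.0.2:8009, three referrals are added and read back both through the RPC interface and through nvme discover, removed again, and then re-added with explicit subsystem NQNs (-n discovery, -n nqn.2016-06.io.spdk:cnode1) before teardown. A hedged recap of the core commands, assuming the usual scripts/rpc.py entry point that rpc_cmd wraps and folding the script's get_referral_ips/get_discovery_entries helpers into plain pipelines:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path; rpc_cmd wraps this
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # read the referrals back two ways and compare, as the script does
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

NVME_HOSTNQN and NVME_HOSTID are the values generated by nvme gen-hostnqn in nvmf/common.sh, visible in the discover invocations above.
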
00:09:01.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.936 14:54:20 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.936 14:54:20 -- nvmf/common.sh@7 -- # uname -s 00:09:01.936 14:54:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.936 14:54:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.936 14:54:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.936 14:54:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.936 14:54:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.936 14:54:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.936 14:54:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.936 14:54:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.936 14:54:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.936 14:54:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.937 14:54:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:01.937 14:54:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:01.937 14:54:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.937 14:54:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.937 14:54:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.937 14:54:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.937 14:54:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.937 14:54:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.937 14:54:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.937 14:54:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.937 14:54:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.937 14:54:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.937 14:54:20 -- paths/export.sh@5 -- # export PATH 00:09:01.937 14:54:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.937 14:54:20 -- nvmf/common.sh@46 -- # : 0 00:09:01.937 14:54:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:01.937 14:54:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:01.937 14:54:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:01.937 14:54:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.937 14:54:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.937 14:54:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:01.937 14:54:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:01.937 14:54:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:01.937 14:54:20 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.937 14:54:20 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.937 14:54:20 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:01.937 14:54:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:01.937 14:54:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.937 14:54:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:01.937 14:54:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:01.937 14:54:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:01.937 14:54:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.937 14:54:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.937 14:54:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.937 14:54:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:01.937 14:54:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:01.937 14:54:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:01.937 14:54:20 -- common/autotest_common.sh@10 -- # set +x 00:09:08.501 14:54:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:08.501 14:54:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:08.501 14:54:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:08.501 14:54:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:08.501 14:54:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:08.501 14:54:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:08.501 14:54:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:08.502 14:54:26 -- nvmf/common.sh@294 -- # net_devs=() 00:09:08.502 14:54:26 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:09:08.502 14:54:26 -- nvmf/common.sh@295 -- # e810=() 00:09:08.502 14:54:26 -- nvmf/common.sh@295 -- # local -ga e810 00:09:08.502 14:54:26 -- nvmf/common.sh@296 -- # x722=() 00:09:08.502 14:54:26 -- nvmf/common.sh@296 -- # local -ga x722 00:09:08.502 14:54:26 -- nvmf/common.sh@297 -- # mlx=() 00:09:08.502 14:54:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:08.502 14:54:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.502 14:54:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:08.502 14:54:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:08.502 14:54:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:08.502 14:54:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:08.502 14:54:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:08.502 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:08.502 14:54:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:08.502 14:54:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:08.502 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:08.502 14:54:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:08.502 14:54:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:08.502 14:54:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.502 14:54:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:08.502 14:54:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.502 14:54:26 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:09:08.502 Found net devices under 0000:af:00.0: cvl_0_0 00:09:08.502 14:54:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.502 14:54:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:08.502 14:54:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.502 14:54:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:08.502 14:54:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.502 14:54:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:08.502 Found net devices under 0000:af:00.1: cvl_0_1 00:09:08.502 14:54:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.502 14:54:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:08.502 14:54:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:08.502 14:54:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:08.502 14:54:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.502 14:54:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.502 14:54:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.502 14:54:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:08.502 14:54:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.502 14:54:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.502 14:54:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:08.502 14:54:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.502 14:54:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.502 14:54:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:08.502 14:54:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:08.502 14:54:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.502 14:54:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.502 14:54:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.502 14:54:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.502 14:54:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:08.502 14:54:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.502 14:54:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.502 14:54:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.502 14:54:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:08.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:09:08.502 00:09:08.502 --- 10.0.0.2 ping statistics --- 00:09:08.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.502 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:09:08.502 14:54:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:09:08.502 00:09:08.502 --- 10.0.0.1 ping statistics --- 00:09:08.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.502 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:08.502 14:54:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.502 14:54:26 -- nvmf/common.sh@410 -- # return 0 00:09:08.502 14:54:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:08.502 14:54:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.502 14:54:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:08.502 14:54:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.502 14:54:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:08.502 14:54:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:08.502 14:54:26 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:08.502 14:54:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:08.502 14:54:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:08.502 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.502 14:54:26 -- nvmf/common.sh@469 -- # nvmfpid=3131530 00:09:08.502 14:54:26 -- nvmf/common.sh@470 -- # waitforlisten 3131530 00:09:08.502 14:54:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.502 14:54:26 -- common/autotest_common.sh@819 -- # '[' -z 3131530 ']' 00:09:08.502 14:54:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.502 14:54:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.502 14:54:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.502 14:54:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.502 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.502 [2024-06-11 14:54:26.789742] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:08.502 [2024-06-11 14:54:26.789798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.502 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.502 [2024-06-11 14:54:26.886491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.502 [2024-06-11 14:54:26.970951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:08.502 [2024-06-11 14:54:26.971107] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.502 [2024-06-11 14:54:26.971119] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.502 [2024-06-11 14:54:26.971127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
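
As in the earlier referrals run, the connect_disconnect target is launched inside the namespace and the script blocks until its RPC socket answers; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above comes from that wait (waitforlisten). A minimal sketch of the launch-and-wait step, with waitforlisten's retry bookkeeping simplified to a plain polling loop (an assumption, not the helper's exact logic):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the target responds, then continue with setup
    for _ in $(seq 1 100); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
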
00:09:08.502 [2024-06-11 14:54:26.971236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.502 [2024-06-11 14:54:26.971336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.502 [2024-06-11 14:54:26.971452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.502 [2024-06-11 14:54:26.971453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.071 14:54:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:09.071 14:54:27 -- common/autotest_common.sh@852 -- # return 0 00:09:09.071 14:54:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:09.071 14:54:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:09.071 14:54:27 -- common/autotest_common.sh@10 -- # set +x 00:09:09.071 14:54:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:09.071 14:54:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.071 14:54:27 -- common/autotest_common.sh@10 -- # set +x 00:09:09.071 [2024-06-11 14:54:27.764759] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.071 14:54:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:09.071 14:54:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.071 14:54:27 -- common/autotest_common.sh@10 -- # set +x 00:09:09.071 14:54:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:09.071 14:54:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.071 14:54:27 -- common/autotest_common.sh@10 -- # set +x 00:09:09.071 14:54:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.071 14:54:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.071 14:54:27 -- common/autotest_common.sh@10 -- # set +x 00:09:09.071 14:54:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.071 14:54:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.071 14:54:27 -- common/autotest_common.sh@10 -- # set +x 00:09:09.071 [2024-06-11 14:54:27.820496] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.071 14:54:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:09.071 14:54:27 -- target/connect_disconnect.sh@34 -- # set +x 00:09:11.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:20.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.661 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:13.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.882 14:58:18 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
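
The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the body of connect_disconnect: a 64 MiB Malloc bdev with 512-byte blocks is exposed through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, then the script loops num_iterations=100 times, connecting with "nvme connect -i 8" and disconnecting again, so each iteration prints one disconnect line. A hedged sketch of that loop (any per-iteration readiness checks the real script performs are omitted):

    subnqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n "$subnqn"    # prints "NQN:... disconnected 1 controller(s)"
    done
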
00:12:59.882 14:58:18 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:59.882 14:58:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:59.882 14:58:18 -- nvmf/common.sh@116 -- # sync 00:12:59.882 14:58:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:59.882 14:58:18 -- nvmf/common.sh@119 -- # set +e 00:12:59.882 14:58:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:59.882 14:58:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:59.882 rmmod nvme_tcp 00:12:59.882 rmmod nvme_fabrics 00:12:59.882 rmmod nvme_keyring 00:12:59.882 14:58:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:59.882 14:58:18 -- nvmf/common.sh@123 -- # set -e 00:12:59.882 14:58:18 -- nvmf/common.sh@124 -- # return 0 00:12:59.882 14:58:18 -- nvmf/common.sh@477 -- # '[' -n 3131530 ']' 00:12:59.882 14:58:18 -- nvmf/common.sh@478 -- # killprocess 3131530 00:12:59.882 14:58:18 -- common/autotest_common.sh@926 -- # '[' -z 3131530 ']' 00:12:59.882 14:58:18 -- common/autotest_common.sh@930 -- # kill -0 3131530 00:12:59.882 14:58:18 -- common/autotest_common.sh@931 -- # uname 00:12:59.882 14:58:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:59.882 14:58:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3131530 00:12:59.882 14:58:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:59.882 14:58:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:59.882 14:58:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3131530' 00:12:59.882 killing process with pid 3131530 00:12:59.882 14:58:18 -- common/autotest_common.sh@945 -- # kill 3131530 00:12:59.882 14:58:18 -- common/autotest_common.sh@950 -- # wait 3131530 00:13:00.141 14:58:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:00.141 14:58:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:00.141 14:58:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:00.141 14:58:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.141 14:58:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:00.141 14:58:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.141 14:58:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.141 14:58:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.676 14:58:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:02.676 00:13:02.676 real 4m0.414s 00:13:02.676 user 15m18.719s 00:13:02.676 sys 0m21.957s 00:13:02.676 14:58:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.676 14:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:02.676 ************************************ 00:13:02.676 END TEST nvmf_connect_disconnect 00:13:02.676 ************************************ 00:13:02.676 14:58:20 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:02.676 14:58:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:02.676 14:58:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:02.676 14:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:02.676 ************************************ 00:13:02.676 START TEST nvmf_multitarget 00:13:02.676 ************************************ 00:13:02.676 14:58:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:02.676 * Looking for test storage... 
00:13:02.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.676 14:58:21 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.676 14:58:21 -- nvmf/common.sh@7 -- # uname -s 00:13:02.676 14:58:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.676 14:58:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.676 14:58:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.676 14:58:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.676 14:58:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.676 14:58:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.676 14:58:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.676 14:58:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.676 14:58:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.676 14:58:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.676 14:58:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:02.676 14:58:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:02.676 14:58:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.676 14:58:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.676 14:58:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.676 14:58:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.676 14:58:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.676 14:58:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.676 14:58:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.676 14:58:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.676 14:58:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.676 14:58:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.676 14:58:21 -- paths/export.sh@5 -- # export PATH 00:13:02.676 14:58:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.676 14:58:21 -- nvmf/common.sh@46 -- # : 0 00:13:02.676 14:58:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:02.676 14:58:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:02.676 14:58:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:02.676 14:58:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.676 14:58:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.676 14:58:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:02.676 14:58:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:02.676 14:58:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:02.676 14:58:21 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:02.676 14:58:21 -- target/multitarget.sh@15 -- # nvmftestinit 00:13:02.676 14:58:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:02.676 14:58:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.676 14:58:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:02.676 14:58:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:02.676 14:58:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:02.676 14:58:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.676 14:58:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.676 14:58:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.676 14:58:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:02.676 14:58:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:02.676 14:58:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:02.676 14:58:21 -- common/autotest_common.sh@10 -- # set +x 00:13:09.286 14:58:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:09.286 14:58:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:09.286 14:58:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:09.286 14:58:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:09.286 14:58:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:09.286 14:58:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:09.286 14:58:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:09.286 14:58:27 -- nvmf/common.sh@294 -- # net_devs=() 00:13:09.286 14:58:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:09.286 14:58:27 -- 
nvmf/common.sh@295 -- # e810=() 00:13:09.286 14:58:27 -- nvmf/common.sh@295 -- # local -ga e810 00:13:09.286 14:58:27 -- nvmf/common.sh@296 -- # x722=() 00:13:09.286 14:58:27 -- nvmf/common.sh@296 -- # local -ga x722 00:13:09.286 14:58:27 -- nvmf/common.sh@297 -- # mlx=() 00:13:09.286 14:58:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:09.286 14:58:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.286 14:58:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.286 14:58:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.286 14:58:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.286 14:58:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.286 14:58:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.286 14:58:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.286 14:58:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.287 14:58:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.287 14:58:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.287 14:58:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.287 14:58:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:09.287 14:58:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:09.287 14:58:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:09.287 14:58:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:09.287 14:58:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:09.287 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:09.287 14:58:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:09.287 14:58:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:09.287 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:09.287 14:58:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:09.287 14:58:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:09.287 14:58:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.287 14:58:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:09.287 14:58:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.287 14:58:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:13:09.287 Found net devices under 0000:af:00.0: cvl_0_0 00:13:09.287 14:58:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.287 14:58:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:09.287 14:58:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.287 14:58:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:09.287 14:58:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.287 14:58:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:09.287 Found net devices under 0000:af:00.1: cvl_0_1 00:13:09.287 14:58:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.287 14:58:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:09.287 14:58:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:09.287 14:58:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:09.287 14:58:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.287 14:58:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.287 14:58:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.287 14:58:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:09.287 14:58:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.287 14:58:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.287 14:58:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:09.287 14:58:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.287 14:58:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.287 14:58:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:09.287 14:58:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:09.287 14:58:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.287 14:58:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.287 14:58:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.287 14:58:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.287 14:58:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:09.287 14:58:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.287 14:58:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.287 14:58:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.287 14:58:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:09.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:13:09.287 00:13:09.287 --- 10.0.0.2 ping statistics --- 00:13:09.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.287 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:09.287 14:58:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:13:09.287 00:13:09.287 --- 10.0.0.1 ping statistics --- 00:13:09.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.287 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:13:09.287 14:58:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.287 14:58:27 -- nvmf/common.sh@410 -- # return 0 00:13:09.287 14:58:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:09.287 14:58:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.287 14:58:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:09.287 14:58:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.287 14:58:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:09.287 14:58:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:09.287 14:58:27 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:09.287 14:58:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:09.287 14:58:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:09.287 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:09.287 14:58:27 -- nvmf/common.sh@469 -- # nvmfpid=3179654 00:13:09.287 14:58:27 -- nvmf/common.sh@470 -- # waitforlisten 3179654 00:13:09.287 14:58:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.287 14:58:27 -- common/autotest_common.sh@819 -- # '[' -z 3179654 ']' 00:13:09.287 14:58:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.287 14:58:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:09.287 14:58:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.287 14:58:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:09.287 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:09.287 [2024-06-11 14:58:27.683444] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:09.287 [2024-06-11 14:58:27.683485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.287 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.287 [2024-06-11 14:58:27.757125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.287 [2024-06-11 14:58:27.845844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:09.287 [2024-06-11 14:58:27.845994] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.287 [2024-06-11 14:58:27.846005] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.287 [2024-06-11 14:58:27.846015] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
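To make the surrounding trace easier to follow: nvmf_tcp_init above carves the two E810 ports into a small back-to-back topology, with cvl_0_0 moved into a fresh network namespace as the target interface (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator (10.0.0.1). A minimal sketch of that setup, condensed from the commands visible in the trace (interface names, addresses and the nvmf_tgt invocation are the ones printed above, not general defaults):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  # the target itself then runs inside the namespace; -m 0xF pins it to 4 cores and
  # -e 0xFFFF enables all tracepoint groups (see the app_setup_trace notices above)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The two pings in the trace are the sanity check that this plumbing works in both directions before the NVMe/TCP listener is added on port 4420.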
00:13:09.287 [2024-06-11 14:58:27.848046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.287 [2024-06-11 14:58:27.848065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.287 [2024-06-11 14:58:27.848181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.287 [2024-06-11 14:58:27.848181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.855 14:58:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:09.855 14:58:28 -- common/autotest_common.sh@852 -- # return 0 00:13:09.855 14:58:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:09.855 14:58:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:09.855 14:58:28 -- common/autotest_common.sh@10 -- # set +x 00:13:09.855 14:58:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.855 14:58:28 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:09.855 14:58:28 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:09.855 14:58:28 -- target/multitarget.sh@21 -- # jq length 00:13:10.113 14:58:28 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:10.113 14:58:28 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:10.113 "nvmf_tgt_1" 00:13:10.113 14:58:28 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:10.372 "nvmf_tgt_2" 00:13:10.372 14:58:28 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.372 14:58:28 -- target/multitarget.sh@28 -- # jq length 00:13:10.372 14:58:29 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:10.372 14:58:29 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:10.631 true 00:13:10.631 14:58:29 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:10.631 true 00:13:10.631 14:58:29 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.631 14:58:29 -- target/multitarget.sh@35 -- # jq length 00:13:10.890 14:58:29 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:10.890 14:58:29 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:10.890 14:58:29 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:10.890 14:58:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:10.890 14:58:29 -- nvmf/common.sh@116 -- # sync 00:13:10.890 14:58:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:10.890 14:58:29 -- nvmf/common.sh@119 -- # set +e 00:13:10.890 14:58:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:10.890 14:58:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:10.890 rmmod nvme_tcp 00:13:10.890 rmmod nvme_fabrics 00:13:10.890 rmmod nvme_keyring 00:13:10.890 14:58:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:10.890 14:58:29 -- nvmf/common.sh@123 -- # set -e 00:13:10.890 14:58:29 -- nvmf/common.sh@124 -- # return 0 
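The multitarget case that just ran reduces to a short RPC round trip against the running target: count the existing targets, create two more, delete them, and check the count returns to one. A condensed sketch of that flow using the same multitarget_rpc.py helper and jq length checks seen in the trace (a sketch of the flow, not the full multitarget.sh):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32         # add two named targets
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default target plus the two new ones
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default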
00:13:10.890 14:58:29 -- nvmf/common.sh@477 -- # '[' -n 3179654 ']' 00:13:10.890 14:58:29 -- nvmf/common.sh@478 -- # killprocess 3179654 00:13:10.890 14:58:29 -- common/autotest_common.sh@926 -- # '[' -z 3179654 ']' 00:13:10.890 14:58:29 -- common/autotest_common.sh@930 -- # kill -0 3179654 00:13:10.890 14:58:29 -- common/autotest_common.sh@931 -- # uname 00:13:10.890 14:58:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:10.890 14:58:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3179654 00:13:10.890 14:58:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:10.890 14:58:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:10.890 14:58:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3179654' 00:13:10.890 killing process with pid 3179654 00:13:10.890 14:58:29 -- common/autotest_common.sh@945 -- # kill 3179654 00:13:10.890 14:58:29 -- common/autotest_common.sh@950 -- # wait 3179654 00:13:11.149 14:58:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:11.149 14:58:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:11.149 14:58:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:11.149 14:58:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.149 14:58:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:11.149 14:58:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.149 14:58:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.149 14:58:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.684 14:58:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:13.684 00:13:13.684 real 0m10.976s 00:13:13.684 user 0m10.704s 00:13:13.684 sys 0m5.431s 00:13:13.684 14:58:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.684 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:13:13.684 ************************************ 00:13:13.684 END TEST nvmf_multitarget 00:13:13.684 ************************************ 00:13:13.684 14:58:31 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:13.684 14:58:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:13.684 14:58:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.684 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:13:13.684 ************************************ 00:13:13.684 START TEST nvmf_rpc 00:13:13.684 ************************************ 00:13:13.684 14:58:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:13.684 * Looking for test storage... 
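For reference, the nvmftestfini/killprocess sequence that closed the multitarget run above amounts to the following; the namespace deletion is an assumption about the harness's _remove_spdk_ns helper, whose output is redirected away in the trace. The nvmf_rpc suite that has just started rebuilds the same environment from scratch.

  modprobe -v -r nvme-tcp              # unload the host-side modules pulled in for the test
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid was 3179654 in this run
  ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of the harness's _remove_spdk_ns
  ip -4 addr flush cvl_0_1             # drop the initiator address again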
00:13:13.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.684 14:58:32 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.684 14:58:32 -- nvmf/common.sh@7 -- # uname -s 00:13:13.684 14:58:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.684 14:58:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.684 14:58:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.684 14:58:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.684 14:58:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.684 14:58:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.684 14:58:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.684 14:58:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.684 14:58:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.684 14:58:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.684 14:58:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:13.684 14:58:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:13.684 14:58:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.684 14:58:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.684 14:58:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.684 14:58:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.684 14:58:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.684 14:58:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.684 14:58:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.684 14:58:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.684 14:58:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.684 14:58:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.684 14:58:32 -- paths/export.sh@5 -- # export PATH 00:13:13.684 14:58:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.684 14:58:32 -- nvmf/common.sh@46 -- # : 0 00:13:13.684 14:58:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:13.684 14:58:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:13.684 14:58:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:13.684 14:58:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.684 14:58:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.684 14:58:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:13.684 14:58:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:13.684 14:58:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:13.684 14:58:32 -- target/rpc.sh@11 -- # loops=5 00:13:13.684 14:58:32 -- target/rpc.sh@23 -- # nvmftestinit 00:13:13.684 14:58:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:13.684 14:58:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.684 14:58:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:13.684 14:58:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:13.684 14:58:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:13.684 14:58:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.685 14:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.685 14:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.685 14:58:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:13.685 14:58:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:13.685 14:58:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:13.685 14:58:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 14:58:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:20.251 14:58:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:20.251 14:58:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:20.251 14:58:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:20.251 14:58:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:20.251 14:58:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:20.251 14:58:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:20.251 14:58:38 -- nvmf/common.sh@294 -- # net_devs=() 00:13:20.251 14:58:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:20.251 14:58:38 -- nvmf/common.sh@295 -- # e810=() 00:13:20.251 14:58:38 -- nvmf/common.sh@295 -- # local -ga e810 00:13:20.251 
14:58:38 -- nvmf/common.sh@296 -- # x722=() 00:13:20.251 14:58:38 -- nvmf/common.sh@296 -- # local -ga x722 00:13:20.251 14:58:38 -- nvmf/common.sh@297 -- # mlx=() 00:13:20.251 14:58:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:20.251 14:58:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.251 14:58:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:20.251 14:58:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:20.251 14:58:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:20.251 14:58:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.251 14:58:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:20.251 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:20.251 14:58:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.251 14:58:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:20.251 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:20.251 14:58:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:20.251 14:58:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.251 14:58:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.251 14:58:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.251 14:58:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.251 14:58:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:20.251 Found net devices under 0000:af:00.0: cvl_0_0 00:13:20.251 14:58:38 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:20.251 14:58:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.251 14:58:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.251 14:58:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.251 14:58:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.251 14:58:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:20.251 Found net devices under 0000:af:00.1: cvl_0_1 00:13:20.251 14:58:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.251 14:58:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:20.251 14:58:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:20.251 14:58:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:20.251 14:58:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.251 14:58:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.251 14:58:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.251 14:58:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:20.251 14:58:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.251 14:58:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.251 14:58:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:20.251 14:58:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.251 14:58:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.251 14:58:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:20.251 14:58:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:20.251 14:58:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.251 14:58:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.251 14:58:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.251 14:58:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.251 14:58:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:20.251 14:58:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.251 14:58:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.251 14:58:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.251 14:58:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:20.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:13:20.251 00:13:20.251 --- 10.0.0.2 ping statistics --- 00:13:20.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.251 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:20.251 14:58:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:13:20.251 00:13:20.251 --- 10.0.0.1 ping statistics --- 00:13:20.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.251 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:13:20.251 14:58:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.251 14:58:38 -- nvmf/common.sh@410 -- # return 0 00:13:20.251 14:58:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:20.251 14:58:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.251 14:58:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:20.251 14:58:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.251 14:58:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:20.251 14:58:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:20.251 14:58:38 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:20.251 14:58:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:20.251 14:58:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:20.251 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 14:58:38 -- nvmf/common.sh@469 -- # nvmfpid=3183983 00:13:20.251 14:58:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.252 14:58:38 -- nvmf/common.sh@470 -- # waitforlisten 3183983 00:13:20.252 14:58:38 -- common/autotest_common.sh@819 -- # '[' -z 3183983 ']' 00:13:20.252 14:58:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.252 14:58:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:20.252 14:58:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.252 14:58:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:20.252 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:13:20.252 [2024-06-11 14:58:38.433133] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:20.252 [2024-06-11 14:58:38.433187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.252 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.252 [2024-06-11 14:58:38.526380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.252 [2024-06-11 14:58:38.614391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:20.252 [2024-06-11 14:58:38.614538] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.252 [2024-06-11 14:58:38.614549] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.252 [2024-06-11 14:58:38.614559] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
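nvmfappstart, used again here for the rpc.sh suite, launches the target inside the namespace and then blocks until its RPC socket answers before any rpc_cmd is issued. A minimal sketch of that pattern with the binary path and flags from the trace; the polling loop is an assumption about what waitforlisten does (rpc.py and rpc_get_methods are standard SPDK pieces, but the harness's real implementation may differ):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the app answers; only then is it safe to configure it
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done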
00:13:20.252 [2024-06-11 14:58:38.614612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.252 [2024-06-11 14:58:38.614714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.252 [2024-06-11 14:58:38.614818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.252 [2024-06-11 14:58:38.614819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.819 14:58:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:20.819 14:58:39 -- common/autotest_common.sh@852 -- # return 0 00:13:20.819 14:58:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:20.819 14:58:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:20.819 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:20.819 14:58:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.819 14:58:39 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:20.819 14:58:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.820 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:20.820 14:58:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.820 14:58:39 -- target/rpc.sh@26 -- # stats='{ 00:13:20.820 "tick_rate": 2200000000, 00:13:20.820 "poll_groups": [ 00:13:20.820 { 00:13:20.820 "name": "nvmf_tgt_poll_group_0", 00:13:20.820 "admin_qpairs": 0, 00:13:20.820 "io_qpairs": 0, 00:13:20.820 "current_admin_qpairs": 0, 00:13:20.820 "current_io_qpairs": 0, 00:13:20.820 "pending_bdev_io": 0, 00:13:20.820 "completed_nvme_io": 0, 00:13:20.820 "transports": [] 00:13:20.820 }, 00:13:20.820 { 00:13:20.820 "name": "nvmf_tgt_poll_group_1", 00:13:20.820 "admin_qpairs": 0, 00:13:20.820 "io_qpairs": 0, 00:13:20.820 "current_admin_qpairs": 0, 00:13:20.820 "current_io_qpairs": 0, 00:13:20.820 "pending_bdev_io": 0, 00:13:20.820 "completed_nvme_io": 0, 00:13:20.820 "transports": [] 00:13:20.820 }, 00:13:20.820 { 00:13:20.820 "name": "nvmf_tgt_poll_group_2", 00:13:20.820 "admin_qpairs": 0, 00:13:20.820 "io_qpairs": 0, 00:13:20.820 "current_admin_qpairs": 0, 00:13:20.820 "current_io_qpairs": 0, 00:13:20.820 "pending_bdev_io": 0, 00:13:20.820 "completed_nvme_io": 0, 00:13:20.820 "transports": [] 00:13:20.820 }, 00:13:20.820 { 00:13:20.820 "name": "nvmf_tgt_poll_group_3", 00:13:20.820 "admin_qpairs": 0, 00:13:20.820 "io_qpairs": 0, 00:13:20.820 "current_admin_qpairs": 0, 00:13:20.820 "current_io_qpairs": 0, 00:13:20.820 "pending_bdev_io": 0, 00:13:20.820 "completed_nvme_io": 0, 00:13:20.820 "transports": [] 00:13:20.820 } 00:13:20.820 ] 00:13:20.820 }' 00:13:20.820 14:58:39 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:20.820 14:58:39 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:20.820 14:58:39 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:20.820 14:58:39 -- target/rpc.sh@15 -- # wc -l 00:13:20.820 14:58:39 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:20.820 14:58:39 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:20.820 14:58:39 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:20.820 14:58:39 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.820 14:58:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.820 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:20.820 [2024-06-11 14:58:39.531221] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.820 14:58:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.820 14:58:39 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:20.820 14:58:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.820 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:20.820 14:58:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.820 14:58:39 -- target/rpc.sh@33 -- # stats='{ 00:13:20.820 "tick_rate": 2200000000, 00:13:20.820 "poll_groups": [ 00:13:20.820 { 00:13:20.820 "name": "nvmf_tgt_poll_group_0", 00:13:20.820 "admin_qpairs": 0, 00:13:20.820 "io_qpairs": 0, 00:13:20.820 "current_admin_qpairs": 0, 00:13:20.820 "current_io_qpairs": 0, 00:13:20.820 "pending_bdev_io": 0, 00:13:20.820 "completed_nvme_io": 0, 00:13:20.820 "transports": [ 00:13:20.820 { 00:13:20.820 "trtype": "TCP" 00:13:20.820 } 00:13:20.820 ] 00:13:20.820 }, 00:13:20.820 { 00:13:20.820 "name": "nvmf_tgt_poll_group_1", 00:13:20.820 "admin_qpairs": 0, 00:13:20.820 "io_qpairs": 0, 00:13:20.820 "current_admin_qpairs": 0, 00:13:20.820 "current_io_qpairs": 0, 00:13:20.820 "pending_bdev_io": 0, 00:13:20.820 "completed_nvme_io": 0, 00:13:20.820 "transports": [ 00:13:20.820 { 00:13:20.820 "trtype": "TCP" 00:13:20.820 } 00:13:20.820 ] 00:13:20.820 }, 00:13:20.820 { 00:13:20.820 "name": "nvmf_tgt_poll_group_2", 00:13:20.820 "admin_qpairs": 0, 00:13:20.820 "io_qpairs": 0, 00:13:20.820 "current_admin_qpairs": 0, 00:13:20.820 "current_io_qpairs": 0, 00:13:20.820 "pending_bdev_io": 0, 00:13:20.820 "completed_nvme_io": 0, 00:13:20.820 "transports": [ 00:13:20.820 { 00:13:20.820 "trtype": "TCP" 00:13:20.820 } 00:13:20.820 ] 00:13:20.820 }, 00:13:20.820 { 00:13:20.820 "name": "nvmf_tgt_poll_group_3", 00:13:20.820 "admin_qpairs": 0, 00:13:20.820 "io_qpairs": 0, 00:13:20.820 "current_admin_qpairs": 0, 00:13:20.820 "current_io_qpairs": 0, 00:13:20.820 "pending_bdev_io": 0, 00:13:20.820 "completed_nvme_io": 0, 00:13:20.820 "transports": [ 00:13:20.820 { 00:13:20.820 "trtype": "TCP" 00:13:20.820 } 00:13:20.820 ] 00:13:20.820 } 00:13:20.820 ] 00:13:20.820 }' 00:13:20.820 14:58:39 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:20.820 14:58:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:20.820 14:58:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:20.820 14:58:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.820 14:58:39 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:20.820 14:58:39 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:20.820 14:58:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:20.820 14:58:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:20.820 14:58:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.820 14:58:39 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:20.820 14:58:39 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:20.820 14:58:39 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:20.820 14:58:39 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:20.820 14:58:39 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:20.820 14:58:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.820 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:21.079 Malloc1 00:13:21.079 14:58:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.079 14:58:39 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:21.079 14:58:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.079 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:21.079 
14:58:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.079 14:58:39 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.079 14:58:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.079 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:21.079 14:58:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.079 14:58:39 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:21.079 14:58:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.080 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:21.080 14:58:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.080 14:58:39 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.080 14:58:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.080 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:21.080 [2024-06-11 14:58:39.711645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.080 14:58:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.080 14:58:39 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:21.080 14:58:39 -- common/autotest_common.sh@640 -- # local es=0 00:13:21.080 14:58:39 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:21.080 14:58:39 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:21.080 14:58:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:21.080 14:58:39 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:21.080 14:58:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:21.080 14:58:39 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:21.080 14:58:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:21.080 14:58:39 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:21.080 14:58:39 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:21.080 14:58:39 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:21.080 [2024-06-11 14:58:39.736392] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:13:21.080 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:21.080 could not add new controller: failed to write to nvme-fabrics device 00:13:21.080 14:58:39 -- common/autotest_common.sh@643 -- # es=1 00:13:21.080 14:58:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:21.080 14:58:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:21.080 14:58:39 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:21.080 14:58:39 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:21.080 14:58:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.080 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:21.080 14:58:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.080 14:58:39 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.458 14:58:41 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:22.458 14:58:41 -- common/autotest_common.sh@1177 -- # local i=0 00:13:22.458 14:58:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.458 14:58:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:22.458 14:58:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:24.361 14:58:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:24.361 14:58:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:24.361 14:58:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.361 14:58:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:24.361 14:58:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.361 14:58:43 -- common/autotest_common.sh@1187 -- # return 0 00:13:24.361 14:58:43 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.620 14:58:43 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.620 14:58:43 -- common/autotest_common.sh@1198 -- # local i=0 00:13:24.620 14:58:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:24.620 14:58:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.620 14:58:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:24.620 14:58:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.620 14:58:43 -- common/autotest_common.sh@1210 -- # return 0 00:13:24.620 14:58:43 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:24.620 14:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.620 14:58:43 -- common/autotest_common.sh@10 -- # set +x 00:13:24.620 14:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.620 14:58:43 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.620 14:58:43 -- common/autotest_common.sh@640 -- # local es=0 00:13:24.620 14:58:43 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.620 14:58:43 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:24.620 14:58:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:24.620 14:58:43 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:24.620 14:58:43 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:24.620 14:58:43 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:24.620 14:58:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:24.620 14:58:43 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:24.620 14:58:43 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:24.620 14:58:43 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.620 [2024-06-11 14:58:43.277219] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:13:24.620 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:24.620 could not add new controller: failed to write to nvme-fabrics device 00:13:24.620 14:58:43 -- common/autotest_common.sh@643 -- # es=1 00:13:24.620 14:58:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:24.620 14:58:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:24.620 14:58:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:24.620 14:58:43 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:24.620 14:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.620 14:58:43 -- common/autotest_common.sh@10 -- # set +x 00:13:24.620 14:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.620 14:58:43 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.996 14:58:44 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.996 14:58:44 -- common/autotest_common.sh@1177 -- # local i=0 00:13:25.996 14:58:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.996 14:58:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:25.996 14:58:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:27.898 14:58:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:27.898 14:58:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:27.898 14:58:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.898 14:58:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:27.898 14:58:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.898 14:58:46 -- common/autotest_common.sh@1187 -- # return 0 00:13:27.898 14:58:46 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.898 14:58:46 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.898 14:58:46 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.898 14:58:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.898 14:58:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.898 14:58:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.898 14:58:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.158 14:58:46 -- common/autotest_common.sh@1210 -- # return 0 00:13:28.158 14:58:46 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.158 14:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.158 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:13:28.158 14:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.158 14:58:46 -- target/rpc.sh@81 -- # seq 1 5 00:13:28.158 14:58:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:28.158 14:58:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.158 14:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.158 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:13:28.158 14:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.158 14:58:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.158 14:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.158 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:13:28.158 [2024-06-11 14:58:46.781747] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.158 14:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.158 14:58:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:28.158 14:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.158 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:13:28.158 14:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.158 14:58:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.158 14:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.158 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:13:28.158 14:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.158 14:58:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.537 14:58:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.537 14:58:48 -- common/autotest_common.sh@1177 -- # local i=0 00:13:29.537 14:58:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.537 14:58:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:29.537 14:58:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:31.443 14:58:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:31.443 14:58:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:31.443 14:58:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.443 14:58:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:31.443 14:58:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.443 14:58:50 -- common/autotest_common.sh@1187 -- # return 0 00:13:31.443 14:58:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.443 14:58:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.443 14:58:50 -- common/autotest_common.sh@1198 -- # local i=0 00:13:31.443 14:58:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:31.443 14:58:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
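The rpc.sh body above is exercising the per-subsystem host allow-list: with allow_any_host disabled, nvme connect is expected to fail with "does not allow host" until the host NQN is registered, and to fail again once it is removed. Condensed from the trace, with rpc_cmd standing in for the harness's RPC wrapper and $NVME_HOSTNQN/$NVME_HOSTID as set by common.sh earlier:

  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # start enforcing the allow-list
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420                 # expected to fail: host not allowed
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $NVME_HOSTNQN
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420                 # now succeeds
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 $NVME_HOSTNQN   # connect would fail again
  rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1           # ...until any host is allowed

The loop entered just above repeats the create/listen/add-namespace/connect/disconnect/delete cycle five times (loops=5 at the top of rpc.sh), attaching Malloc1 as namespace 5 each time, which is the pattern that continues through the remaining trace.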
00:13:31.443 14:58:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:31.443 14:58:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.443 14:58:50 -- common/autotest_common.sh@1210 -- # return 0 00:13:31.443 14:58:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.443 14:58:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.443 14:58:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.443 14:58:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.443 14:58:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.702 14:58:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.702 14:58:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.702 14:58:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.702 14:58:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.702 14:58:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.702 14:58:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.702 14:58:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.702 14:58:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.702 14:58:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.702 14:58:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.702 14:58:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.702 [2024-06-11 14:58:50.303403] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.702 14:58:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.702 14:58:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.702 14:58:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.702 14:58:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.702 14:58:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.702 14:58:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.702 14:58:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.702 14:58:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.702 14:58:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.703 14:58:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.081 14:58:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.081 14:58:51 -- common/autotest_common.sh@1177 -- # local i=0 00:13:33.081 14:58:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.081 14:58:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:33.081 14:58:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:34.987 14:58:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:34.987 14:58:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:34.987 14:58:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.987 14:58:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:34.987 14:58:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.987 14:58:53 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:34.987 14:58:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.987 14:58:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.987 14:58:53 -- common/autotest_common.sh@1198 -- # local i=0 00:13:34.987 14:58:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:34.987 14:58:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.987 14:58:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:34.987 14:58:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.987 14:58:53 -- common/autotest_common.sh@1210 -- # return 0 00:13:34.987 14:58:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.987 14:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.987 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 14:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.987 14:58:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.987 14:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.987 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 14:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.987 14:58:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:34.987 14:58:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.987 14:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.987 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 14:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.987 14:58:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.987 14:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.987 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 [2024-06-11 14:58:53.751906] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.987 14:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.987 14:58:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:34.987 14:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.987 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 14:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.987 14:58:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.987 14:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.987 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 14:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.987 14:58:53 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.366 14:58:55 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.366 14:58:55 -- common/autotest_common.sh@1177 -- # local i=0 00:13:36.366 14:58:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.366 14:58:55 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:36.366 14:58:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:38.901 14:58:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:38.901 14:58:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:38.901 14:58:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.901 14:58:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:38.901 14:58:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.901 14:58:57 -- common/autotest_common.sh@1187 -- # return 0 00:13:38.901 14:58:57 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.901 14:58:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:38.901 14:58:57 -- common/autotest_common.sh@1198 -- # local i=0 00:13:38.901 14:58:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:38.901 14:58:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.901 14:58:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:38.901 14:58:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.901 14:58:57 -- common/autotest_common.sh@1210 -- # return 0 00:13:38.901 14:58:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.901 14:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.901 14:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:38.901 14:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.901 14:58:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.901 14:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.901 14:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:38.901 14:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.901 14:58:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:38.901 14:58:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.901 14:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.901 14:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:38.901 14:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.901 14:58:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.901 14:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.901 14:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:38.901 [2024-06-11 14:58:57.309202] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.901 14:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.901 14:58:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:38.901 14:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.901 14:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:38.901 14:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.901 14:58:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.901 14:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.901 14:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:38.901 14:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.901 
14:58:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:39.839 14:58:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.839 14:58:58 -- common/autotest_common.sh@1177 -- # local i=0 00:13:39.839 14:58:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.839 14:58:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:39.839 14:58:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:42.424 14:59:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:42.424 14:59:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:42.424 14:59:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.424 14:59:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:42.424 14:59:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.424 14:59:00 -- common/autotest_common.sh@1187 -- # return 0 00:13:42.424 14:59:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.424 14:59:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.424 14:59:00 -- common/autotest_common.sh@1198 -- # local i=0 00:13:42.424 14:59:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:42.424 14:59:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.424 14:59:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:42.424 14:59:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.424 14:59:00 -- common/autotest_common.sh@1210 -- # return 0 00:13:42.424 14:59:00 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.424 14:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.424 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:13:42.424 14:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.424 14:59:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.424 14:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.424 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:13:42.424 14:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.424 14:59:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:42.424 14:59:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.424 14:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.424 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:13:42.424 14:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.424 14:59:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.424 14:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.424 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:13:42.424 [2024-06-11 14:59:00.799379] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.424 14:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.424 14:59:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:42.424 
14:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.424 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:13:42.424 14:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.424 14:59:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.424 14:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.424 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:13:42.424 14:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.424 14:59:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:43.361 14:59:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:43.361 14:59:02 -- common/autotest_common.sh@1177 -- # local i=0 00:13:43.361 14:59:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.361 14:59:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:43.361 14:59:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:45.896 14:59:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:45.896 14:59:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:45.896 14:59:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.896 14:59:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:45.896 14:59:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.896 14:59:04 -- common/autotest_common.sh@1187 -- # return 0 00:13:45.896 14:59:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.896 14:59:04 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.896 14:59:04 -- common/autotest_common.sh@1198 -- # local i=0 00:13:45.896 14:59:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:45.896 14:59:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.896 14:59:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:45.896 14:59:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.896 14:59:04 -- common/autotest_common.sh@1210 -- # return 0 00:13:45.896 14:59:04 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@99 -- # seq 1 5 00:13:45.896 14:59:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:45.896 14:59:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 [2024-06-11 14:59:04.336893] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:45.896 14:59:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 [2024-06-11 14:59:04.385030] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- 
common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:45.896 14:59:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 [2024-06-11 14:59:04.433204] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:45.896 14:59:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 [2024-06-11 14:59:04.485397] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 
14:59:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.896 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.896 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.896 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.896 14:59:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:45.896 14:59:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.897 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.897 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.897 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.897 14:59:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.897 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.897 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.897 [2024-06-11 14:59:04.533574] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.897 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.897 14:59:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.897 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.897 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.897 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.897 14:59:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.897 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.897 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.897 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.897 14:59:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.897 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.897 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.897 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.897 14:59:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.897 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.897 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.897 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.897 14:59:04 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
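After the last delete, the test pulls per-poll-group statistics and checks that every group actually carried traffic. The stats JSON captured just below is reduced by the jsum helper from target/rpc.sh, which sums one numeric field across all poll groups with jq and awk; a minimal sketch, assuming rpc.py is reachable:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    stats=$(rpc.py nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 784 in this run

With four poll groups at 196 I/O qpairs each, the 784 total confirms the connections were spread across all reactors.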
00:13:45.897 14:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.897 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.897 14:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.897 14:59:04 -- target/rpc.sh@110 -- # stats='{ 00:13:45.897 "tick_rate": 2200000000, 00:13:45.897 "poll_groups": [ 00:13:45.897 { 00:13:45.897 "name": "nvmf_tgt_poll_group_0", 00:13:45.897 "admin_qpairs": 2, 00:13:45.897 "io_qpairs": 196, 00:13:45.897 "current_admin_qpairs": 0, 00:13:45.897 "current_io_qpairs": 0, 00:13:45.897 "pending_bdev_io": 0, 00:13:45.897 "completed_nvme_io": 269, 00:13:45.897 "transports": [ 00:13:45.897 { 00:13:45.897 "trtype": "TCP" 00:13:45.897 } 00:13:45.897 ] 00:13:45.897 }, 00:13:45.897 { 00:13:45.897 "name": "nvmf_tgt_poll_group_1", 00:13:45.897 "admin_qpairs": 2, 00:13:45.897 "io_qpairs": 196, 00:13:45.897 "current_admin_qpairs": 0, 00:13:45.897 "current_io_qpairs": 0, 00:13:45.897 "pending_bdev_io": 0, 00:13:45.897 "completed_nvme_io": 380, 00:13:45.897 "transports": [ 00:13:45.897 { 00:13:45.897 "trtype": "TCP" 00:13:45.897 } 00:13:45.897 ] 00:13:45.897 }, 00:13:45.897 { 00:13:45.897 "name": "nvmf_tgt_poll_group_2", 00:13:45.897 "admin_qpairs": 1, 00:13:45.897 "io_qpairs": 196, 00:13:45.897 "current_admin_qpairs": 0, 00:13:45.897 "current_io_qpairs": 0, 00:13:45.897 "pending_bdev_io": 0, 00:13:45.897 "completed_nvme_io": 273, 00:13:45.897 "transports": [ 00:13:45.897 { 00:13:45.897 "trtype": "TCP" 00:13:45.897 } 00:13:45.897 ] 00:13:45.897 }, 00:13:45.897 { 00:13:45.897 "name": "nvmf_tgt_poll_group_3", 00:13:45.897 "admin_qpairs": 2, 00:13:45.897 "io_qpairs": 196, 00:13:45.897 "current_admin_qpairs": 0, 00:13:45.897 "current_io_qpairs": 0, 00:13:45.897 "pending_bdev_io": 0, 00:13:45.897 "completed_nvme_io": 212, 00:13:45.897 "transports": [ 00:13:45.897 { 00:13:45.897 "trtype": "TCP" 00:13:45.897 } 00:13:45.897 ] 00:13:45.897 } 00:13:45.897 ] 00:13:45.897 }' 00:13:45.897 14:59:04 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:45.897 14:59:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:45.897 14:59:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:45.897 14:59:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:45.897 14:59:04 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:45.897 14:59:04 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:45.897 14:59:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:45.897 14:59:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:45.897 14:59:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:45.897 14:59:04 -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:13:45.897 14:59:04 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:45.897 14:59:04 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:45.897 14:59:04 -- target/rpc.sh@123 -- # nvmftestfini 00:13:45.897 14:59:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:45.897 14:59:04 -- nvmf/common.sh@116 -- # sync 00:13:45.897 14:59:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:45.897 14:59:04 -- nvmf/common.sh@119 -- # set +e 00:13:45.897 14:59:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:45.897 14:59:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:45.897 rmmod nvme_tcp 00:13:45.897 rmmod nvme_fabrics 00:13:45.897 rmmod nvme_keyring 00:13:46.157 14:59:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:46.157 14:59:04 -- nvmf/common.sh@123 -- # set -e 00:13:46.157 14:59:04 -- 
nvmf/common.sh@124 -- # return 0 00:13:46.157 14:59:04 -- nvmf/common.sh@477 -- # '[' -n 3183983 ']' 00:13:46.157 14:59:04 -- nvmf/common.sh@478 -- # killprocess 3183983 00:13:46.157 14:59:04 -- common/autotest_common.sh@926 -- # '[' -z 3183983 ']' 00:13:46.157 14:59:04 -- common/autotest_common.sh@930 -- # kill -0 3183983 00:13:46.157 14:59:04 -- common/autotest_common.sh@931 -- # uname 00:13:46.157 14:59:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:46.157 14:59:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3183983 00:13:46.157 14:59:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:46.157 14:59:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:46.157 14:59:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3183983' 00:13:46.157 killing process with pid 3183983 00:13:46.157 14:59:04 -- common/autotest_common.sh@945 -- # kill 3183983 00:13:46.157 14:59:04 -- common/autotest_common.sh@950 -- # wait 3183983 00:13:46.416 14:59:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:46.416 14:59:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:46.416 14:59:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:46.416 14:59:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.416 14:59:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:46.416 14:59:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.416 14:59:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.416 14:59:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.319 14:59:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:48.319 00:13:48.319 real 0m35.147s 00:13:48.319 user 1m46.971s 00:13:48.319 sys 0m6.782s 00:13:48.319 14:59:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.319 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:13:48.319 ************************************ 00:13:48.319 END TEST nvmf_rpc 00:13:48.319 ************************************ 00:13:48.319 14:59:07 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:48.319 14:59:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:48.319 14:59:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:48.319 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:13:48.578 ************************************ 00:13:48.578 START TEST nvmf_invalid 00:13:48.578 ************************************ 00:13:48.578 14:59:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:48.578 * Looking for test storage... 
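The teardown traced above (nvmftestfini) unloads the nvme-tcp modules and then killprocess stops the nvmf_tgt reactor started for the run; only after that does the next test, nvmf_invalid, begin. A simplified sketch of the killprocess step, leaving out the sudo and non-Linux special cases the real helper handles:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0     # already gone
        echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
        kill "$pid"
        wait "$pid" 2>/dev/null || true  # reap it; nvmf_tgt is a child of this shell
    }

In this run the target is pid 3183983 and reports itself as reactor_0, i.e. the SPDK app's primary reactor thread.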
00:13:48.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.578 14:59:07 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.578 14:59:07 -- nvmf/common.sh@7 -- # uname -s 00:13:48.578 14:59:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.578 14:59:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.578 14:59:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.578 14:59:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.578 14:59:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.578 14:59:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.578 14:59:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.578 14:59:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.578 14:59:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.578 14:59:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.578 14:59:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:48.578 14:59:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:48.578 14:59:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.578 14:59:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.578 14:59:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.578 14:59:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.578 14:59:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.578 14:59:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.578 14:59:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.578 14:59:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.578 14:59:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.578 14:59:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.578 14:59:07 -- paths/export.sh@5 -- # export PATH 00:13:48.578 14:59:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.578 14:59:07 -- nvmf/common.sh@46 -- # : 0 00:13:48.578 14:59:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:48.578 14:59:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:48.578 14:59:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:48.578 14:59:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.578 14:59:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.578 14:59:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:48.578 14:59:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:48.578 14:59:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:48.578 14:59:07 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:48.578 14:59:07 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.578 14:59:07 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:48.578 14:59:07 -- target/invalid.sh@14 -- # target=foobar 00:13:48.578 14:59:07 -- target/invalid.sh@16 -- # RANDOM=0 00:13:48.578 14:59:07 -- target/invalid.sh@34 -- # nvmftestinit 00:13:48.578 14:59:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:48.578 14:59:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.578 14:59:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:48.578 14:59:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:48.578 14:59:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:48.578 14:59:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.578 14:59:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.578 14:59:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.578 14:59:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:48.578 14:59:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:48.578 14:59:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:48.578 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:13:55.146 14:59:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:55.146 14:59:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:55.146 14:59:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:55.146 14:59:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:55.146 14:59:13 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:55.146 14:59:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:55.146 14:59:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:55.146 14:59:13 -- nvmf/common.sh@294 -- # net_devs=() 00:13:55.146 14:59:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:55.146 14:59:13 -- nvmf/common.sh@295 -- # e810=() 00:13:55.146 14:59:13 -- nvmf/common.sh@295 -- # local -ga e810 00:13:55.146 14:59:13 -- nvmf/common.sh@296 -- # x722=() 00:13:55.146 14:59:13 -- nvmf/common.sh@296 -- # local -ga x722 00:13:55.146 14:59:13 -- nvmf/common.sh@297 -- # mlx=() 00:13:55.146 14:59:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:55.146 14:59:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.146 14:59:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:55.146 14:59:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:55.146 14:59:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:55.146 14:59:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:55.146 14:59:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:55.146 14:59:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:55.146 14:59:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:55.147 14:59:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:55.147 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:55.147 14:59:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:55.147 14:59:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:55.147 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:55.147 14:59:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:55.147 14:59:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:55.147 
14:59:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.147 14:59:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:55.147 14:59:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.147 14:59:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:55.147 Found net devices under 0000:af:00.0: cvl_0_0 00:13:55.147 14:59:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.147 14:59:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:55.147 14:59:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.147 14:59:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:55.147 14:59:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.147 14:59:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:55.147 Found net devices under 0000:af:00.1: cvl_0_1 00:13:55.147 14:59:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.147 14:59:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:55.147 14:59:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:55.147 14:59:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:55.147 14:59:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.147 14:59:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.147 14:59:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.147 14:59:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:55.147 14:59:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.147 14:59:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.147 14:59:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:55.147 14:59:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.147 14:59:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.147 14:59:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:55.147 14:59:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:55.147 14:59:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.147 14:59:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.147 14:59:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.147 14:59:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.147 14:59:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:55.147 14:59:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.147 14:59:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.147 14:59:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.147 14:59:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:55.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:13:55.147 00:13:55.147 --- 10.0.0.2 ping statistics --- 00:13:55.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.147 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:55.147 14:59:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:13:55.147 00:13:55.147 --- 10.0.0.1 ping statistics --- 00:13:55.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.147 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:13:55.147 14:59:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.147 14:59:13 -- nvmf/common.sh@410 -- # return 0 00:13:55.147 14:59:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:55.147 14:59:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.147 14:59:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:55.147 14:59:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.147 14:59:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:55.147 14:59:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:55.147 14:59:13 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:55.147 14:59:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:55.147 14:59:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:55.147 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.147 14:59:13 -- nvmf/common.sh@469 -- # nvmfpid=3192969 00:13:55.147 14:59:13 -- nvmf/common.sh@470 -- # waitforlisten 3192969 00:13:55.147 14:59:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.147 14:59:13 -- common/autotest_common.sh@819 -- # '[' -z 3192969 ']' 00:13:55.147 14:59:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.147 14:59:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:55.147 14:59:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.147 14:59:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:55.147 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.147 [2024-06-11 14:59:13.946856] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:55.147 [2024-06-11 14:59:13.946913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.406 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.406 [2024-06-11 14:59:14.040719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.406 [2024-06-11 14:59:14.129179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:55.406 [2024-06-11 14:59:14.129322] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.406 [2024-06-11 14:59:14.129333] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.407 [2024-06-11 14:59:14.129342] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
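Because both ends of the fabric live on one host, nvmf_tcp_init moves the target-side port into its own network namespace before the target app is launched; the two pings above (initiator to 10.0.0.2 and, from inside the namespace, back to 10.0.0.1) verify the wiring before any NVMe traffic flows. The sequence, using the interface names and addresses from this log (cvl_0_0 / cvl_0_1 on the E810 ports):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process itself is then launched via 'ip netns exec cvl_0_0_ns_spdk', which is why the NVMe/TCP listener on 10.0.0.2:4420 is only reachable through that namespace.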
00:13:55.407 [2024-06-11 14:59:14.129449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.407 [2024-06-11 14:59:14.129572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.407 [2024-06-11 14:59:14.129600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.407 [2024-06-11 14:59:14.129599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.974 14:59:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:55.974 14:59:14 -- common/autotest_common.sh@852 -- # return 0 00:13:55.974 14:59:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:55.974 14:59:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:55.974 14:59:14 -- common/autotest_common.sh@10 -- # set +x 00:13:56.233 14:59:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.233 14:59:14 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:56.233 14:59:14 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31710 00:13:56.233 [2024-06-11 14:59:15.060184] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:56.492 14:59:15 -- target/invalid.sh@40 -- # out='request: 00:13:56.492 { 00:13:56.492 "nqn": "nqn.2016-06.io.spdk:cnode31710", 00:13:56.492 "tgt_name": "foobar", 00:13:56.492 "method": "nvmf_create_subsystem", 00:13:56.492 "req_id": 1 00:13:56.492 } 00:13:56.492 Got JSON-RPC error response 00:13:56.492 response: 00:13:56.492 { 00:13:56.492 "code": -32603, 00:13:56.492 "message": "Unable to find target foobar" 00:13:56.492 }' 00:13:56.492 14:59:15 -- target/invalid.sh@41 -- # [[ request: 00:13:56.492 { 00:13:56.492 "nqn": "nqn.2016-06.io.spdk:cnode31710", 00:13:56.492 "tgt_name": "foobar", 00:13:56.492 "method": "nvmf_create_subsystem", 00:13:56.492 "req_id": 1 00:13:56.492 } 00:13:56.492 Got JSON-RPC error response 00:13:56.492 response: 00:13:56.492 { 00:13:56.492 "code": -32603, 00:13:56.492 "message": "Unable to find target foobar" 00:13:56.492 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:56.492 14:59:15 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:56.492 14:59:15 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4939 00:13:56.492 [2024-06-11 14:59:15.313154] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4939: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:56.751 14:59:15 -- target/invalid.sh@45 -- # out='request: 00:13:56.751 { 00:13:56.751 "nqn": "nqn.2016-06.io.spdk:cnode4939", 00:13:56.751 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:56.751 "method": "nvmf_create_subsystem", 00:13:56.751 "req_id": 1 00:13:56.751 } 00:13:56.751 Got JSON-RPC error response 00:13:56.751 response: 00:13:56.751 { 00:13:56.751 "code": -32602, 00:13:56.751 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:56.751 }' 00:13:56.751 14:59:15 -- target/invalid.sh@46 -- # [[ request: 00:13:56.751 { 00:13:56.751 "nqn": "nqn.2016-06.io.spdk:cnode4939", 00:13:56.751 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:56.752 "method": "nvmf_create_subsystem", 00:13:56.752 "req_id": 1 00:13:56.752 } 00:13:56.752 Got JSON-RPC error response 00:13:56.752 response: 00:13:56.752 { 
00:13:56.752 "code": -32602, 00:13:56.752 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:56.752 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:56.752 14:59:15 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:56.752 14:59:15 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26918 00:13:56.752 [2024-06-11 14:59:15.566052] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26918: invalid model number 'SPDK_Controller' 00:13:57.011 14:59:15 -- target/invalid.sh@50 -- # out='request: 00:13:57.011 { 00:13:57.011 "nqn": "nqn.2016-06.io.spdk:cnode26918", 00:13:57.011 "model_number": "SPDK_Controller\u001f", 00:13:57.011 "method": "nvmf_create_subsystem", 00:13:57.011 "req_id": 1 00:13:57.011 } 00:13:57.011 Got JSON-RPC error response 00:13:57.011 response: 00:13:57.011 { 00:13:57.011 "code": -32602, 00:13:57.011 "message": "Invalid MN SPDK_Controller\u001f" 00:13:57.011 }' 00:13:57.011 14:59:15 -- target/invalid.sh@51 -- # [[ request: 00:13:57.011 { 00:13:57.011 "nqn": "nqn.2016-06.io.spdk:cnode26918", 00:13:57.011 "model_number": "SPDK_Controller\u001f", 00:13:57.011 "method": "nvmf_create_subsystem", 00:13:57.011 "req_id": 1 00:13:57.011 } 00:13:57.011 Got JSON-RPC error response 00:13:57.011 response: 00:13:57.011 { 00:13:57.011 "code": -32602, 00:13:57.011 "message": "Invalid MN SPDK_Controller\u001f" 00:13:57.011 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:57.011 14:59:15 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:57.011 14:59:15 -- target/invalid.sh@19 -- # local length=21 ll 00:13:57.011 14:59:15 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:57.011 14:59:15 -- target/invalid.sh@21 -- # local chars 00:13:57.011 14:59:15 -- target/invalid.sh@22 -- # local string 00:13:57.011 14:59:15 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:57.011 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # printf %x 89 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # string+=Y 00:13:57.011 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.011 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # printf %x 123 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # string+='{' 00:13:57.011 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.011 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # printf %x 56 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # string+=8 00:13:57.011 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.011 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.011 14:59:15 -- target/invalid.sh@25 -- # printf %x 86 00:13:57.011 14:59:15 -- 
target/invalid.sh@25 -- # echo -e '\x56' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=V 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 60 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+='<' 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 47 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=/ 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 95 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=_ 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 110 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=n 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 111 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=o 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 49 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=1 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 42 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+='*' 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 83 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=S 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 44 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=, 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 56 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=8 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 65 00:13:57.012 14:59:15 -- 
target/invalid.sh@25 -- # echo -e '\x41' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=A 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 60 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+='<' 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 111 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=o 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 72 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=H 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 125 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+='}' 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 96 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+='`' 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # printf %x 79 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:57.012 14:59:15 -- target/invalid.sh@25 -- # string+=O 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:57.012 14:59:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:57.012 14:59:15 -- target/invalid.sh@28 -- # [[ Y == \- ]] 00:13:57.012 14:59:15 -- target/invalid.sh@31 -- # echo 'Y{8V /dev/null' 00:14:00.384 14:59:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.291 14:59:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:02.291 00:14:02.291 real 0m13.912s 00:14:02.291 user 0m24.255s 00:14:02.291 sys 0m6.051s 00:14:02.291 14:59:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.291 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:14:02.291 ************************************ 00:14:02.291 END TEST nvmf_invalid 00:14:02.291 ************************************ 00:14:02.291 14:59:21 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:02.291 14:59:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:02.291 14:59:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.291 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:14:02.291 ************************************ 00:14:02.291 START TEST nvmf_abort 00:14:02.291 ************************************ 00:14:02.291 14:59:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:02.550 * Looking for test storage... 
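The xtrace above steps through target/invalid.sh's random-string generator one character at a time: pick a code point between 32 and 127, turn it into hex with printf %x, emit the byte with echo -e, and append it to the string under test. Condensed into a sketch (illustrative only, not the verbatim helper; the random selection shown here is an assumption):

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))                       # printable ASCII plus DEL
        for ((ll = 0; ll < length; ll++)); do
            # hex code via printf %x, raw byte via echo -e, exactly as traced above
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }
    # invalid.sh feeds such strings to rpc.py nvmf_create_subsystem -s/-d and
    # expects the "Invalid SN"/"Invalid MN" JSON-RPC errors quoted above.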
00:14:02.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.550 14:59:21 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.550 14:59:21 -- nvmf/common.sh@7 -- # uname -s 00:14:02.550 14:59:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.550 14:59:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.550 14:59:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.550 14:59:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.550 14:59:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.550 14:59:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.550 14:59:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.550 14:59:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.550 14:59:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.550 14:59:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.550 14:59:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:02.550 14:59:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:02.550 14:59:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.550 14:59:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.550 14:59:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.550 14:59:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.550 14:59:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.550 14:59:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.550 14:59:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.550 14:59:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.550 14:59:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.550 14:59:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.550 14:59:21 -- paths/export.sh@5 -- # export PATH 00:14:02.550 14:59:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.550 14:59:21 -- nvmf/common.sh@46 -- # : 0 00:14:02.550 14:59:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:02.550 14:59:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:02.550 14:59:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:02.550 14:59:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.550 14:59:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.550 14:59:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:02.550 14:59:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:02.550 14:59:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:02.550 14:59:21 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.550 14:59:21 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:02.550 14:59:21 -- target/abort.sh@14 -- # nvmftestinit 00:14:02.550 14:59:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:02.550 14:59:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.550 14:59:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:02.550 14:59:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:02.550 14:59:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:02.550 14:59:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.550 14:59:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.550 14:59:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.550 14:59:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:02.550 14:59:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:02.550 14:59:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:02.550 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:14:09.120 14:59:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:09.120 14:59:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:09.120 14:59:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:09.120 14:59:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:09.120 14:59:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:09.120 14:59:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:09.120 14:59:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:09.120 14:59:27 -- nvmf/common.sh@294 -- # net_devs=() 00:14:09.120 14:59:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:09.120 14:59:27 -- nvmf/common.sh@295 -- 
# e810=() 00:14:09.120 14:59:27 -- nvmf/common.sh@295 -- # local -ga e810 00:14:09.120 14:59:27 -- nvmf/common.sh@296 -- # x722=() 00:14:09.120 14:59:27 -- nvmf/common.sh@296 -- # local -ga x722 00:14:09.120 14:59:27 -- nvmf/common.sh@297 -- # mlx=() 00:14:09.120 14:59:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:09.120 14:59:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.120 14:59:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:09.120 14:59:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:09.120 14:59:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:09.120 14:59:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:09.120 14:59:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:09.120 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:09.120 14:59:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:09.120 14:59:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:09.120 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:09.120 14:59:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:09.120 14:59:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:09.120 14:59:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.120 14:59:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:09.120 14:59:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.120 14:59:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:09.120 Found 
net devices under 0000:af:00.0: cvl_0_0 00:14:09.120 14:59:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.120 14:59:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:09.120 14:59:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.120 14:59:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:09.120 14:59:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.120 14:59:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:09.120 Found net devices under 0000:af:00.1: cvl_0_1 00:14:09.120 14:59:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.120 14:59:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:09.120 14:59:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:09.120 14:59:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:09.120 14:59:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:09.120 14:59:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.120 14:59:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.120 14:59:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.120 14:59:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:09.120 14:59:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.120 14:59:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.121 14:59:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:09.121 14:59:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.121 14:59:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.121 14:59:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:09.121 14:59:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:09.121 14:59:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.121 14:59:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.121 14:59:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.121 14:59:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.121 14:59:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:09.121 14:59:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.121 14:59:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.121 14:59:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.121 14:59:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:09.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:14:09.121 00:14:09.121 --- 10.0.0.2 ping statistics --- 00:14:09.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.121 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:14:09.121 14:59:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:14:09.121 00:14:09.121 --- 10.0.0.1 ping statistics --- 00:14:09.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.121 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:14:09.121 14:59:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.121 14:59:27 -- nvmf/common.sh@410 -- # return 0 00:14:09.121 14:59:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:09.121 14:59:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.121 14:59:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:09.121 14:59:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:09.121 14:59:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.121 14:59:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:09.121 14:59:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:09.121 14:59:27 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:09.121 14:59:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:09.121 14:59:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:09.121 14:59:27 -- common/autotest_common.sh@10 -- # set +x 00:14:09.121 14:59:27 -- nvmf/common.sh@469 -- # nvmfpid=3198140 00:14:09.121 14:59:27 -- nvmf/common.sh@470 -- # waitforlisten 3198140 00:14:09.121 14:59:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:09.121 14:59:27 -- common/autotest_common.sh@819 -- # '[' -z 3198140 ']' 00:14:09.121 14:59:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.121 14:59:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:09.121 14:59:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.121 14:59:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:09.121 14:59:27 -- common/autotest_common.sh@10 -- # set +x 00:14:09.121 [2024-06-11 14:59:27.778641] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:09.121 [2024-06-11 14:59:27.778696] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.121 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.121 [2024-06-11 14:59:27.866211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.121 [2024-06-11 14:59:27.953559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:09.121 [2024-06-11 14:59:27.953704] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.121 [2024-06-11 14:59:27.953716] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.121 [2024-06-11 14:59:27.953725] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
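Condensed, the nvmftestinit/nvmf_tcp_init sequence traced above splits the two E810 ports between an initiator in the root namespace and a target namespace, after which nvmfappstart launches nvmf_tgt inside that namespace. A sketch using the names this run printed (address-flush steps omitted, long paths shortened, backgrounding of the target is assumed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # sanity check before starting the target
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    waitforlisten "$!"                       # block until the RPC socket answers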
00:14:09.121 [2024-06-11 14:59:27.953831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.121 [2024-06-11 14:59:27.953946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.121 [2024-06-11 14:59:27.953946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.058 14:59:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:10.058 14:59:28 -- common/autotest_common.sh@852 -- # return 0 00:14:10.058 14:59:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:10.058 14:59:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:10.058 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.058 14:59:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.058 14:59:28 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:10.058 14:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.058 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.058 [2024-06-11 14:59:28.757459] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.058 14:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.058 14:59:28 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:10.058 14:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.058 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.058 Malloc0 00:14:10.058 14:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.058 14:59:28 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:10.058 14:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.058 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.058 Delay0 00:14:10.058 14:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.058 14:59:28 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:10.058 14:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.058 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.058 14:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.059 14:59:28 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:10.059 14:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.059 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.059 14:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.059 14:59:28 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:10.059 14:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.059 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.059 [2024-06-11 14:59:28.828200] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.059 14:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.059 14:59:28 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.059 14:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.059 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.059 14:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.059 14:59:28 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:10.059 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.318 [2024-06-11 14:59:28.991244] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:12.853 Initializing NVMe Controllers 00:14:12.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:12.853 controller IO queue size 128 less than required 00:14:12.853 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:12.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:12.854 Initialization complete. Launching workers. 00:14:12.854 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28857 00:14:12.854 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28918, failed to submit 62 00:14:12.854 success 28857, unsuccess 61, failed 0 00:14:12.854 14:59:31 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:12.854 14:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.854 14:59:31 -- common/autotest_common.sh@10 -- # set +x 00:14:12.854 14:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.854 14:59:31 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:12.854 14:59:31 -- target/abort.sh@38 -- # nvmftestfini 00:14:12.854 14:59:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:12.854 14:59:31 -- nvmf/common.sh@116 -- # sync 00:14:12.854 14:59:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:12.854 14:59:31 -- nvmf/common.sh@119 -- # set +e 00:14:12.854 14:59:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:12.854 14:59:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:12.854 rmmod nvme_tcp 00:14:12.854 rmmod nvme_fabrics 00:14:12.854 rmmod nvme_keyring 00:14:12.854 14:59:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:12.854 14:59:31 -- nvmf/common.sh@123 -- # set -e 00:14:12.854 14:59:31 -- nvmf/common.sh@124 -- # return 0 00:14:12.854 14:59:31 -- nvmf/common.sh@477 -- # '[' -n 3198140 ']' 00:14:12.854 14:59:31 -- nvmf/common.sh@478 -- # killprocess 3198140 00:14:12.854 14:59:31 -- common/autotest_common.sh@926 -- # '[' -z 3198140 ']' 00:14:12.854 14:59:31 -- common/autotest_common.sh@930 -- # kill -0 3198140 00:14:12.854 14:59:31 -- common/autotest_common.sh@931 -- # uname 00:14:12.854 14:59:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:12.854 14:59:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3198140 00:14:12.854 14:59:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:12.854 14:59:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:12.854 14:59:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3198140' 00:14:12.854 killing process with pid 3198140 00:14:12.854 14:59:31 -- common/autotest_common.sh@945 -- # kill 3198140 00:14:12.854 14:59:31 -- common/autotest_common.sh@950 -- # wait 3198140 00:14:12.854 14:59:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:12.854 14:59:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:12.854 14:59:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:12.854 14:59:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.854 14:59:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:12.854 
14:59:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.854 14:59:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.854 14:59:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.759 14:59:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:14.759 00:14:14.759 real 0m12.397s 00:14:14.759 user 0m14.028s 00:14:14.759 sys 0m5.845s 00:14:14.759 14:59:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.759 14:59:33 -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 ************************************ 00:14:14.759 END TEST nvmf_abort 00:14:14.759 ************************************ 00:14:14.759 14:59:33 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:14.759 14:59:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:14.759 14:59:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:14.759 14:59:33 -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 ************************************ 00:14:14.759 START TEST nvmf_ns_hotplug_stress 00:14:14.759 ************************************ 00:14:14.759 14:59:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:15.018 * Looking for test storage... 00:14:15.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.018 14:59:33 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.018 14:59:33 -- nvmf/common.sh@7 -- # uname -s 00:14:15.018 14:59:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.018 14:59:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.018 14:59:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.018 14:59:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.018 14:59:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.018 14:59:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.018 14:59:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.018 14:59:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.018 14:59:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.018 14:59:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.018 14:59:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:15.018 14:59:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:15.018 14:59:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.018 14:59:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.018 14:59:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.018 14:59:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.018 14:59:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.018 14:59:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.018 14:59:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.018 14:59:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.019 14:59:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.019 14:59:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.019 14:59:33 -- paths/export.sh@5 -- # export PATH 00:14:15.019 14:59:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.019 14:59:33 -- nvmf/common.sh@46 -- # : 0 00:14:15.019 14:59:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:15.019 14:59:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:15.019 14:59:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:15.019 14:59:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.019 14:59:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.019 14:59:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:15.019 14:59:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:15.019 14:59:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:15.019 14:59:33 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.019 14:59:33 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:15.019 14:59:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:15.019 14:59:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.019 14:59:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:15.019 14:59:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:15.019 14:59:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:15.019 14:59:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:15.019 14:59:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.019 14:59:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.019 14:59:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:15.019 14:59:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:15.019 14:59:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:15.019 14:59:33 -- common/autotest_common.sh@10 -- # set +x 00:14:21.638 14:59:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.638 14:59:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:21.638 14:59:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:21.638 14:59:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:21.638 14:59:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:21.638 14:59:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:21.638 14:59:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:21.638 14:59:39 -- nvmf/common.sh@294 -- # net_devs=() 00:14:21.638 14:59:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:21.638 14:59:39 -- nvmf/common.sh@295 -- # e810=() 00:14:21.638 14:59:39 -- nvmf/common.sh@295 -- # local -ga e810 00:14:21.638 14:59:39 -- nvmf/common.sh@296 -- # x722=() 00:14:21.638 14:59:39 -- nvmf/common.sh@296 -- # local -ga x722 00:14:21.638 14:59:39 -- nvmf/common.sh@297 -- # mlx=() 00:14:21.638 14:59:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:21.638 14:59:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.638 14:59:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:21.638 14:59:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:21.638 14:59:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:21.638 14:59:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.638 14:59:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:21.638 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:21.638 14:59:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.638 14:59:39 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:21.638 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:21.638 14:59:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:21.638 14:59:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.638 14:59:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.638 14:59:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.638 14:59:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.638 14:59:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:21.638 Found net devices under 0000:af:00.0: cvl_0_0 00:14:21.638 14:59:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.638 14:59:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.638 14:59:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.638 14:59:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.638 14:59:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.638 14:59:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:21.638 Found net devices under 0000:af:00.1: cvl_0_1 00:14:21.638 14:59:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.638 14:59:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:21.638 14:59:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:21.638 14:59:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:21.638 14:59:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:21.638 14:59:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.638 14:59:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.638 14:59:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.638 14:59:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:21.638 14:59:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.638 14:59:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.638 14:59:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:21.638 14:59:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.638 14:59:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.638 14:59:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:21.638 14:59:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:21.638 14:59:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.638 14:59:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.638 14:59:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.638 14:59:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.638 14:59:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:21.638 14:59:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:14:21.638 14:59:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.638 14:59:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.638 14:59:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:21.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:14:21.638 00:14:21.638 --- 10.0.0.2 ping statistics --- 00:14:21.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.638 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:14:21.638 14:59:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:14:21.639 00:14:21.639 --- 10.0.0.1 ping statistics --- 00:14:21.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.639 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:14:21.639 14:59:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.639 14:59:40 -- nvmf/common.sh@410 -- # return 0 00:14:21.639 14:59:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:21.639 14:59:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.639 14:59:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:21.639 14:59:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:21.639 14:59:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.639 14:59:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:21.639 14:59:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:21.639 14:59:40 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:21.639 14:59:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:21.639 14:59:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:21.639 14:59:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.639 14:59:40 -- nvmf/common.sh@469 -- # nvmfpid=3202834 00:14:21.639 14:59:40 -- nvmf/common.sh@470 -- # waitforlisten 3202834 00:14:21.639 14:59:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:21.639 14:59:40 -- common/autotest_common.sh@819 -- # '[' -z 3202834 ']' 00:14:21.639 14:59:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.639 14:59:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.639 14:59:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.639 14:59:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.639 14:59:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.639 [2024-06-11 14:59:40.221939] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:14:21.639 [2024-06-11 14:59:40.221993] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.639 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.639 [2024-06-11 14:59:40.309252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.639 [2024-06-11 14:59:40.393019] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:21.639 [2024-06-11 14:59:40.393169] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.639 [2024-06-11 14:59:40.393181] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.639 [2024-06-11 14:59:40.393195] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.639 [2024-06-11 14:59:40.393306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.639 [2024-06-11 14:59:40.393418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.639 [2024-06-11 14:59:40.393419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.574 14:59:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.574 14:59:41 -- common/autotest_common.sh@852 -- # return 0 00:14:22.574 14:59:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:22.574 14:59:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:22.574 14:59:41 -- common/autotest_common.sh@10 -- # set +x 00:14:22.574 14:59:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.574 14:59:41 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:22.574 14:59:41 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:22.574 [2024-06-11 14:59:41.344043] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.574 14:59:41 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:22.833 14:59:41 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.092 [2024-06-11 14:59:41.838927] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.092 14:59:41 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:23.351 14:59:42 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:23.609 Malloc0 00:14:23.609 14:59:42 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:23.868 Delay0 00:14:23.868 14:59:42 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.127 14:59:42 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:24.385 NULL1 00:14:24.386 14:59:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:24.645 14:59:43 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:24.645 14:59:43 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3203395 00:14:24.645 14:59:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:24.645 14:59:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.645 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.022 Read completed with error (sct=0, sc=11) 00:14:26.022 14:59:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.023 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.023 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.281 14:59:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:26.281 14:59:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:26.281 true 00:14:26.540 14:59:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:26.540 14:59:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.108 14:59:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.367 14:59:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:27.367 14:59:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:27.626 true 00:14:27.626 14:59:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:27.626 14:59:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.885 14:59:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.144 14:59:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:28.144 14:59:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:28.403 true 00:14:28.403 14:59:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:28.403 14:59:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
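The repeating @44-@50 entries above and below are the hotplug loop itself: while spdk_nvme_perf (PERF_PID=3203395, -q 128 -w randread -t 30) keeps I/O in flight, namespace 1 is removed and Delay0 re-added, and NULL1 is resized one step larger each pass (1001, 1002, ...); the bursts of suppressed "Read completed with error" messages line up with the windows in which the namespace is detached. A condensed sketch of the loop (not the verbatim ns_hotplug_stress.sh):

    rpc_py=./scripts/rpc.py                    # path shortened
    PERF_PID=3203395                           # pid recorded at @42 above
    null_size=1000
    while kill -0 "$PERF_PID"; do              # keep cycling until the perf run exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"    # prints "true" on success
    done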
00:14:29.340 14:59:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:29.598 14:59:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:29.598 14:59:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:29.857 true 00:14:29.857 14:59:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:29.857 14:59:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.116 14:59:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.375 14:59:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:30.375 14:59:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:30.375 true 00:14:30.633 14:59:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:30.633 14:59:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.569 14:59:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.828 14:59:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:31.828 14:59:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:31.828 true 00:14:31.828 14:59:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:31.828 14:59:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.087 14:59:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.346 14:59:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:32.346 14:59:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:32.604 true 00:14:32.604 14:59:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:32.604 14:59:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.540 14:59:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:33.798 14:59:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:33.798 14:59:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:34.057 true 00:14:34.057 14:59:52 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:34.057 14:59:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.315 14:59:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.574 14:59:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:34.574 14:59:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:34.832 true 00:14:34.832 14:59:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:34.832 14:59:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.769 14:59:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:35.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:36.026 14:59:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:36.026 14:59:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:36.285 true 00:14:36.285 14:59:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:36.285 14:59:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.544 14:59:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.802 14:59:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:36.802 14:59:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:36.802 true 00:14:36.802 14:59:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:36.802 14:59:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.179 14:59:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:38.179 14:59:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:38.179 14:59:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:38.437 true 00:14:38.438 14:59:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:38.438 14:59:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.696 14:59:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.955 14:59:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:38.955 14:59:57 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:39.213 true 00:14:39.213 14:59:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:39.213 14:59:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.150 14:59:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:40.408 14:59:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:40.408 14:59:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:40.667 true 00:14:40.667 14:59:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:40.667 14:59:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.926 14:59:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.184 14:59:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:41.184 14:59:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:41.184 true 00:14:41.184 15:00:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:41.184 15:00:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:42.121 15:00:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:42.380 15:00:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:42.380 15:00:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:42.639 true 00:14:42.639 15:00:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:42.639 15:00:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.898 15:00:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.156 15:00:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:43.156 15:00:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:43.413 true 00:14:43.413 15:00:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:43.413 15:00:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.348 15:00:03 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.606 15:00:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:44.606 15:00:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:44.871 true 00:14:44.871 15:00:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:44.871 15:00:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.130 15:00:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.388 15:00:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:45.388 15:00:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:45.646 true 00:14:45.646 15:00:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:45.646 15:00:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.578 15:00:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.866 15:00:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:46.866 15:00:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:47.154 true 00:14:47.154 15:00:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:47.155 15:00:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.412 15:00:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.670 15:00:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:47.670 15:00:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:47.930 true 00:14:47.930 15:00:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:47.930 15:00:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.868 15:00:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:48.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:48.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:48.868 15:00:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:48.868 15:00:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:49.126 true 00:14:49.126 15:00:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:49.126 15:00:07 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.384 15:00:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.642 15:00:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:49.642 15:00:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:49.900 true 00:14:49.900 15:00:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:49.900 15:00:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.271 15:00:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:51.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:51.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:51.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:51.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:51.271 15:00:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:51.271 15:00:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:51.530 true 00:14:51.530 15:00:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:51.530 15:00:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.467 15:00:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.467 15:00:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:52.467 15:00:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:52.726 true 00:14:52.726 15:00:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:52.726 15:00:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.985 15:00:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.243 15:00:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:53.243 15:00:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:53.500 true 00:14:53.500 15:00:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:53.500 15:00:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.435 15:00:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.695 15:00:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:54.695 15:00:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:54.695 true 00:14:54.695 15:00:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:54.695 15:00:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.953 Initializing NVMe Controllers 00:14:54.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:54.953 Controller IO queue size 128, less than required. 00:14:54.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.953 Controller IO queue size 128, less than required. 00:14:54.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:54.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:54.953 Initialization complete. Launching workers. 00:14:54.953 ======================================================== 00:14:54.953 Latency(us) 00:14:54.953 Device Information : IOPS MiB/s Average min max 00:14:54.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 995.89 0.49 74478.47 2811.17 1126401.64 00:14:54.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17284.51 8.44 7405.78 2121.32 455386.69 00:14:54.953 ======================================================== 00:14:54.953 Total : 18280.40 8.93 11059.79 2121.32 1126401.64 00:14:54.953 00:14:54.953 15:00:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.212 15:00:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:55.212 15:00:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:55.471 true 00:14:55.471 15:00:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3203395 00:14:55.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3203395) - No such process 00:14:55.471 15:00:14 -- target/ns_hotplug_stress.sh@53 -- # wait 3203395 00:14:55.471 15:00:14 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.730 15:00:14 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:55.989 15:00:14 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:55.989 15:00:14 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:55.989 15:00:14 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:55.989 15:00:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:55.989 15:00:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:56.248 null0 00:14:56.248 15:00:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:56.248 15:00:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:14:56.248 15:00:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:56.506 null1 00:14:56.506 15:00:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:56.506 15:00:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:56.506 15:00:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:56.765 null2 00:14:56.765 15:00:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:56.765 15:00:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:56.765 15:00:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:57.024 null3 00:14:57.024 15:00:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:57.024 15:00:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:57.024 15:00:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:57.283 null4 00:14:57.283 15:00:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:57.283 15:00:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:57.283 15:00:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:57.542 null5 00:14:57.542 15:00:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:57.542 15:00:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:57.542 15:00:16 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:57.801 null6 00:14:57.801 15:00:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:57.801 15:00:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:57.801 15:00:16 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:58.060 null7 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
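The @44-@50 entries that fill the trace above reduce to a single hotplug-and-resize loop driven against nqn.2016-06.io.spdk:cnode1 while a background I/O generator (pid 3203395 in this run) stays alive. The sketch below is a reconstruction from the xtrace only, not the verbatim ns_hotplug_stress.sh source; the while form, the perf_pid name, and the starting null_size are assumptions.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                              # assumed starting value; the excerpt above is already at 1004+
    # perf_pid: pid of the background I/O generator (3203395 in this run)
    while kill -0 "$perf_pid"; do               # @44: keep going while the I/O generator is alive
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # @45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # @46: hot-add it back, backed by Delay0
        ((++null_size))                                                   # @49: 1004, 1005, ... as logged
        "$rpc" bdev_null_resize NULL1 "$null_size"                        # @50: grow NULL1 by one step per pass
    done
    wait "$perf_pid"                            # @53: runs once kill -0 reports "No such process"

The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are rate-limited read-error reports from the initiator while namespace 1 is being bounced underneath it, which is the condition this stress loop exercises.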
00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.060 15:00:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
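Once the I/O generator exits, the trace moves to the multi-worker phase: the @59/@60 loop creates eight null bdevs (null0 through null7), and the @62-@64 loop, whose entries begin just above and continue below, launches one background add_remove worker per bdev and records its pid for the @66 wait. A reconstruction, reusing $rpc from the previous sketch; the for-loop form and the nsid/bdev pairing are inferred from the trace rather than copied from the script.

    nthreads=8                                  # @58
    pids=()                                     # @58
    for ((i = 0; i < nthreads; i++)); do        # @59
        "$rpc" bdev_null_create "null$i" 100 4096   # @60: null bdev, size and block-size arguments as traced
    done
    for ((i = 0; i < nthreads; i++)); do        # @62
        add_remove $((i + 1)) "null$i" &        # @63: nsid 1..8 paired with null0..null7
        pids+=($!)                              # @64
    done
    wait "${pids[@]}"                           # @66: 3210088 3210089 ... in this run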
00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@66 -- # wait 3210088 3210089 3210091 3210093 3210095 3210097 3210099 3210101 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.061 15:00:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:58.320 15:00:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:58.320 15:00:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:58.320 15:00:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.320 15:00:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:58.320 15:00:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.320 15:00:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:58.320 15:00:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:58.320 15:00:16 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.579 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.838 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:59.098 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:59.358 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.358 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:59.358 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:59.358 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.358 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.358 15:00:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.358 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:59.617 15:00:18 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.617 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.617 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:59.617 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:59.617 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:59.617 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.617 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.618 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:59.618 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.618 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:59.618 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
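Each of the interleaved @14-@18 entries above and below belongs to one of those eight workers. Per worker, the trace reduces to the function sketched here (local nsid/bdev at @14, a ten-iteration counter at @16, add at @17, remove at @18); the body is inferred from the xtrace, so treat it as an illustration rather than the verbatim ns_hotplug_stress.sh source.

    add_remove() {
        local nsid=$1 bdev=$2                   # @14
        for ((i = 0; i < 10; i++)); do          # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

With eight of these running concurrently against the same subsystem, the add and remove calls interleave arbitrarily, which is exactly the interleaving visible in the surrounding trace.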
00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.875 15:00:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:00.134 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:00.134 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:00.134 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:00.134 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:00.134 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:00.134 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:00.134 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:00.134 15:00:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.393 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:00.652 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:00.652 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:00.652 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:00.652 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:00.652 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.652 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:00.652 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:00.652 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.919 15:00:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:01.181 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:01.181 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:01.181 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:01.181 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:01.181 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:01.181 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:01.182 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:01.182 15:00:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.441 15:00:20 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.441 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.700 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:01.959 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:02.218 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:15:02.218 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:02.218 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:02.218 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:02.218 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:02.218 15:00:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.218 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.218 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.218 15:00:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:02.218 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.218 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.218 15:00:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:02.478 15:00:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.738 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:02.996 15:00:21 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:02.996 15:00:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:02.996 15:00:21 -- nvmf/common.sh@116 -- # sync 00:15:02.996 15:00:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:02.996 15:00:21 -- nvmf/common.sh@119 -- # set +e 00:15:02.996 15:00:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:02.996 15:00:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:02.996 rmmod nvme_tcp 00:15:02.996 rmmod nvme_fabrics 00:15:02.996 rmmod nvme_keyring 00:15:02.996 15:00:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.996 15:00:21 -- nvmf/common.sh@123 -- # set -e 00:15:02.996 15:00:21 -- nvmf/common.sh@124 -- # return 0 00:15:02.996 15:00:21 -- nvmf/common.sh@477 -- # '[' -n 3202834 ']' 00:15:02.996 15:00:21 -- 
nvmf/common.sh@478 -- # killprocess 3202834 00:15:02.996 15:00:21 -- common/autotest_common.sh@926 -- # '[' -z 3202834 ']' 00:15:02.996 15:00:21 -- common/autotest_common.sh@930 -- # kill -0 3202834 00:15:02.996 15:00:21 -- common/autotest_common.sh@931 -- # uname 00:15:02.996 15:00:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.996 15:00:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3202834 00:15:02.996 15:00:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:02.996 15:00:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:02.996 15:00:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3202834' 00:15:02.996 killing process with pid 3202834 00:15:02.996 15:00:21 -- common/autotest_common.sh@945 -- # kill 3202834 00:15:02.996 15:00:21 -- common/autotest_common.sh@950 -- # wait 3202834 00:15:03.255 15:00:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:03.255 15:00:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:03.255 15:00:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:03.255 15:00:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.255 15:00:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:03.255 15:00:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.255 15:00:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.255 15:00:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.789 15:00:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:05.789 00:15:05.789 real 0m50.536s 00:15:05.789 user 3m29.895s 00:15:05.789 sys 0m16.255s 00:15:05.789 15:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.789 15:00:24 -- common/autotest_common.sh@10 -- # set +x 00:15:05.789 ************************************ 00:15:05.789 END TEST nvmf_ns_hotplug_stress 00:15:05.789 ************************************ 00:15:05.789 15:00:24 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:05.789 15:00:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:05.789 15:00:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:05.789 15:00:24 -- common/autotest_common.sh@10 -- # set +x 00:15:05.789 ************************************ 00:15:05.789 START TEST nvmf_connect_stress 00:15:05.789 ************************************ 00:15:05.789 15:00:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:05.789 * Looking for test storage... 
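For reference, the interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls in the ns_hotplug_stress trace above come from a short loop (script lines 16-18 in the xtrace). The sketch below is reconstructed from the trace alone; the concurrent dispatch and the variable names are assumptions, not the verbatim script.

# Hypothetical reconstruction of the hotplug loop seen in the xtrace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; i++ )); do
    # Attach null bdevs null0..null7 as namespaces 1..8; launching the RPCs
    # in the background would explain the shuffled ordering seen in the log.
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    # Detach the same namespaces, again concurrently, to stress hot-remove.
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done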
00:15:05.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.789 15:00:24 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.789 15:00:24 -- nvmf/common.sh@7 -- # uname -s 00:15:05.789 15:00:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.789 15:00:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.789 15:00:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.789 15:00:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.789 15:00:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.789 15:00:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.789 15:00:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.789 15:00:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.789 15:00:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.789 15:00:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.789 15:00:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:05.789 15:00:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:05.789 15:00:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.789 15:00:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.789 15:00:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.789 15:00:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.789 15:00:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.789 15:00:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.789 15:00:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.789 15:00:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.789 15:00:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.789 15:00:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.789 15:00:24 -- paths/export.sh@5 -- # export PATH 00:15:05.789 15:00:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.789 15:00:24 -- nvmf/common.sh@46 -- # : 0 00:15:05.789 15:00:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:05.789 15:00:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:05.789 15:00:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:05.789 15:00:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.789 15:00:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.789 15:00:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:05.789 15:00:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:05.789 15:00:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:05.789 15:00:24 -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:05.789 15:00:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:05.789 15:00:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.789 15:00:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:05.789 15:00:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:05.789 15:00:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:05.789 15:00:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.789 15:00:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.789 15:00:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.789 15:00:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:05.789 15:00:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:05.789 15:00:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:05.789 15:00:24 -- common/autotest_common.sh@10 -- # set +x 00:15:12.354 15:00:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:12.354 15:00:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:12.354 15:00:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:12.354 15:00:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:12.354 15:00:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:12.354 15:00:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:12.354 15:00:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:12.354 15:00:30 -- nvmf/common.sh@294 -- # net_devs=() 00:15:12.354 15:00:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:12.354 15:00:30 -- nvmf/common.sh@295 -- # e810=() 00:15:12.354 15:00:30 -- nvmf/common.sh@295 -- # local -ga e810 00:15:12.354 15:00:30 -- nvmf/common.sh@296 -- # x722=() 
00:15:12.354 15:00:30 -- nvmf/common.sh@296 -- # local -ga x722 00:15:12.354 15:00:30 -- nvmf/common.sh@297 -- # mlx=() 00:15:12.354 15:00:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:12.354 15:00:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.354 15:00:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.354 15:00:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.354 15:00:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.354 15:00:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.354 15:00:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.355 15:00:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.355 15:00:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.355 15:00:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.355 15:00:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.355 15:00:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.355 15:00:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:12.355 15:00:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:12.355 15:00:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:12.355 15:00:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:12.355 15:00:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:12.355 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:12.355 15:00:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:12.355 15:00:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:12.355 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:12.355 15:00:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:12.355 15:00:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:12.355 15:00:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.355 15:00:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:12.355 15:00:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.355 15:00:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:12.355 Found net devices under 0000:af:00.0: cvl_0_0 00:15:12.355 15:00:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
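The two "Found net devices under 0000:af:00.x" messages come from a sysfs lookup in nvmf/common.sh: each supported PCI function is mapped to its kernel netdev name. In outline (the PCI addresses are hard-coded here for illustration; the real helper derives them from the device-ID scan shown above):

# Map supported NIC PCI functions to their netdev names via sysfs.
declare -a net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done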
00:15:12.355 15:00:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:12.355 15:00:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.355 15:00:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:12.355 15:00:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.355 15:00:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:12.355 Found net devices under 0000:af:00.1: cvl_0_1 00:15:12.355 15:00:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.355 15:00:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:12.355 15:00:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:12.355 15:00:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:12.355 15:00:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.355 15:00:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.355 15:00:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.355 15:00:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:12.355 15:00:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.355 15:00:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.355 15:00:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:12.355 15:00:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.355 15:00:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.355 15:00:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:12.355 15:00:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:12.355 15:00:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.355 15:00:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.355 15:00:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.355 15:00:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.355 15:00:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:12.355 15:00:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.355 15:00:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.355 15:00:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.355 15:00:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:12.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:15:12.355 00:15:12.355 --- 10.0.0.2 ping statistics --- 00:15:12.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.355 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:15:12.355 15:00:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:12.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:15:12.355 00:15:12.355 --- 10.0.0.1 ping statistics --- 00:15:12.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.355 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:15:12.355 15:00:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.355 15:00:30 -- nvmf/common.sh@410 -- # return 0 00:15:12.355 15:00:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:12.355 15:00:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.355 15:00:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:12.355 15:00:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.355 15:00:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:12.355 15:00:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:12.355 15:00:30 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:12.355 15:00:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:12.355 15:00:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:12.355 15:00:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.355 15:00:30 -- nvmf/common.sh@469 -- # nvmfpid=3215084 00:15:12.355 15:00:30 -- nvmf/common.sh@470 -- # waitforlisten 3215084 00:15:12.355 15:00:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:12.355 15:00:30 -- common/autotest_common.sh@819 -- # '[' -z 3215084 ']' 00:15:12.355 15:00:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.355 15:00:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.355 15:00:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.355 15:00:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.355 15:00:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.355 [2024-06-11 15:00:30.709506] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:12.355 [2024-06-11 15:00:30.709560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.355 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.355 [2024-06-11 15:00:30.797252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:12.355 [2024-06-11 15:00:30.884768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:12.355 [2024-06-11 15:00:30.884913] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.355 [2024-06-11 15:00:30.884925] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.355 [2024-06-11 15:00:30.884935] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
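The ip / iptables / ping sequence above is how nvmf_tcp_init wires the two E810 ports together for loopback-style TCP testing: the target-side port (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2, while the initiator stays in the root namespace on 10.0.0.1. Condensed from the trace, same commands with comments added:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator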
00:15:12.355 [2024-06-11 15:00:30.885046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.355 [2024-06-11 15:00:30.885161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.355 [2024-06-11 15:00:30.885163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.922 15:00:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.922 15:00:31 -- common/autotest_common.sh@852 -- # return 0 00:15:12.922 15:00:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:12.922 15:00:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:12.922 15:00:31 -- common/autotest_common.sh@10 -- # set +x 00:15:12.922 15:00:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.922 15:00:31 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.922 15:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.922 15:00:31 -- common/autotest_common.sh@10 -- # set +x 00:15:12.922 [2024-06-11 15:00:31.684659] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.922 15:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.922 15:00:31 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:12.922 15:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.922 15:00:31 -- common/autotest_common.sh@10 -- # set +x 00:15:12.922 15:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.922 15:00:31 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.922 15:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.922 15:00:31 -- common/autotest_common.sh@10 -- # set +x 00:15:12.922 [2024-06-11 15:00:31.716149] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.922 15:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.922 15:00:31 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:12.922 15:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.922 15:00:31 -- common/autotest_common.sh@10 -- # set +x 00:15:12.922 NULL1 00:15:12.922 15:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.922 15:00:31 -- target/connect_stress.sh@21 -- # PERF_PID=3215367 00:15:12.922 15:00:31 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:12.922 15:00:31 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:12.922 15:00:31 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:12.922 15:00:31 -- target/connect_stress.sh@27 -- # seq 1 20 00:15:12.922 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.922 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:12.922 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.922 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:12.922 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.922 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:12.922 15:00:31 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.922 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:12.922 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.922 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:12.922 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.922 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.922 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.180 15:00:31 -- target/connect_stress.sh@28 -- # cat 00:15:13.180 15:00:31 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:13.180 15:00:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.180 15:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.180 15:00:31 -- common/autotest_common.sh@10 -- # set +x 00:15:13.443 15:00:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.443 15:00:32 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:13.443 15:00:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.443 15:00:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.443 15:00:32 -- common/autotest_common.sh@10 -- # set +x 00:15:13.704 15:00:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.704 15:00:32 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:13.704 15:00:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.704 15:00:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.704 15:00:32 -- common/autotest_common.sh@10 -- # set +x 00:15:13.961 15:00:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.961 15:00:32 -- target/connect_stress.sh@34 -- # 
kill -0 3215367 00:15:13.961 15:00:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.961 15:00:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.961 15:00:32 -- common/autotest_common.sh@10 -- # set +x 00:15:14.525 15:00:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.526 15:00:33 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:14.526 15:00:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.526 15:00:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.526 15:00:33 -- common/autotest_common.sh@10 -- # set +x 00:15:14.783 15:00:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.783 15:00:33 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:14.783 15:00:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.783 15:00:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.783 15:00:33 -- common/autotest_common.sh@10 -- # set +x 00:15:15.041 15:00:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.041 15:00:33 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:15.041 15:00:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.041 15:00:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.041 15:00:33 -- common/autotest_common.sh@10 -- # set +x 00:15:15.299 15:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.299 15:00:34 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:15.300 15:00:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.300 15:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.300 15:00:34 -- common/autotest_common.sh@10 -- # set +x 00:15:15.560 15:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.560 15:00:34 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:15.560 15:00:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.560 15:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.560 15:00:34 -- common/autotest_common.sh@10 -- # set +x 00:15:16.169 15:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.169 15:00:34 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:16.169 15:00:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.169 15:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.169 15:00:34 -- common/autotest_common.sh@10 -- # set +x 00:15:16.428 15:00:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.429 15:00:35 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:16.429 15:00:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.429 15:00:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.429 15:00:35 -- common/autotest_common.sh@10 -- # set +x 00:15:16.687 15:00:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.687 15:00:35 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:16.687 15:00:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.687 15:00:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.687 15:00:35 -- common/autotest_common.sh@10 -- # set +x 00:15:16.944 15:00:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.944 15:00:35 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:16.944 15:00:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.944 15:00:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.944 15:00:35 -- common/autotest_common.sh@10 -- # set +x 00:15:17.202 15:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.202 15:00:36 -- target/connect_stress.sh@34 -- # kill -0 
3215367 00:15:17.202 15:00:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.202 15:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.202 15:00:36 -- common/autotest_common.sh@10 -- # set +x 00:15:17.769 15:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.769 15:00:36 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:17.769 15:00:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.769 15:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.769 15:00:36 -- common/autotest_common.sh@10 -- # set +x 00:15:18.028 15:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.028 15:00:36 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:18.028 15:00:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.028 15:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.028 15:00:36 -- common/autotest_common.sh@10 -- # set +x 00:15:18.287 15:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.287 15:00:36 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:18.287 15:00:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.287 15:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.287 15:00:36 -- common/autotest_common.sh@10 -- # set +x 00:15:18.544 15:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.544 15:00:37 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:18.544 15:00:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.544 15:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.544 15:00:37 -- common/autotest_common.sh@10 -- # set +x 00:15:18.802 15:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.802 15:00:37 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:18.802 15:00:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.802 15:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.802 15:00:37 -- common/autotest_common.sh@10 -- # set +x 00:15:19.370 15:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.370 15:00:37 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:19.370 15:00:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.370 15:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.370 15:00:37 -- common/autotest_common.sh@10 -- # set +x 00:15:19.629 15:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.629 15:00:38 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:19.629 15:00:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.629 15:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.629 15:00:38 -- common/autotest_common.sh@10 -- # set +x 00:15:19.887 15:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.887 15:00:38 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:19.887 15:00:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.887 15:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.887 15:00:38 -- common/autotest_common.sh@10 -- # set +x 00:15:20.146 15:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.146 15:00:38 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:20.146 15:00:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.146 15:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.146 15:00:38 -- common/autotest_common.sh@10 -- # set +x 00:15:20.405 15:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.405 15:00:39 -- target/connect_stress.sh@34 -- # kill -0 3215367 
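The long run of repeated "kill -0 3215367" / "rpc_cmd" pairs is the stress loop itself: while the connect_stress binary (PID 3215367, started at connect_stress.sh lines 20-21 above) is alive, the script keeps replaying the RPC batch assembled by the seq-1-20 loop. The shape of that loop, inferred from the script line numbers in the xtrace rather than quoted from the script:

# Inferred shape of the monitoring loop (connect_stress.sh lines 34-39 in
# the xtrace). PERF_PID is the backgrounded connect_stress process and
# $rpcs is the temporary batch file built earlier; both names come from
# the trace, the loop body is an assumption.
while kill -0 "$PERF_PID"; do
    rpc_cmd < "$rpcs"     # keep the target busy while connections churn
done
wait "$PERF_PID"          # line 38: collect the tool's exit status
rm -f "$rpcs"             # line 39: drop the temporary RPC batch file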
00:15:20.405 15:00:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.405 15:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.405 15:00:39 -- common/autotest_common.sh@10 -- # set +x 00:15:20.973 15:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.973 15:00:39 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:20.973 15:00:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.973 15:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.973 15:00:39 -- common/autotest_common.sh@10 -- # set +x 00:15:21.240 15:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.240 15:00:39 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:21.240 15:00:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.240 15:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.240 15:00:39 -- common/autotest_common.sh@10 -- # set +x 00:15:21.498 15:00:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.498 15:00:40 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:21.498 15:00:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.498 15:00:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.498 15:00:40 -- common/autotest_common.sh@10 -- # set +x 00:15:21.756 15:00:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.756 15:00:40 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:21.756 15:00:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.756 15:00:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.756 15:00:40 -- common/autotest_common.sh@10 -- # set +x 00:15:22.013 15:00:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.013 15:00:40 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:22.013 15:00:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.013 15:00:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.013 15:00:40 -- common/autotest_common.sh@10 -- # set +x 00:15:22.580 15:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.580 15:00:41 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:22.580 15:00:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.580 15:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.580 15:00:41 -- common/autotest_common.sh@10 -- # set +x 00:15:22.839 15:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.839 15:00:41 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:22.839 15:00:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.839 15:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.839 15:00:41 -- common/autotest_common.sh@10 -- # set +x 00:15:23.097 15:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.097 15:00:41 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:23.097 15:00:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.097 15:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.097 15:00:41 -- common/autotest_common.sh@10 -- # set +x 00:15:23.097 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.355 15:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.355 15:00:42 -- target/connect_stress.sh@34 -- # kill -0 3215367 00:15:23.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3215367) - No such process 00:15:23.355 15:00:42 -- target/connect_stress.sh@38 -- # wait 3215367 00:15:23.356 15:00:42 -- target/connect_stress.sh@39 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:23.356 15:00:42 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:23.356 15:00:42 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:23.356 15:00:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:23.356 15:00:42 -- nvmf/common.sh@116 -- # sync 00:15:23.356 15:00:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:23.356 15:00:42 -- nvmf/common.sh@119 -- # set +e 00:15:23.356 15:00:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:23.356 15:00:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:23.356 rmmod nvme_tcp 00:15:23.356 rmmod nvme_fabrics 00:15:23.356 rmmod nvme_keyring 00:15:23.356 15:00:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:23.356 15:00:42 -- nvmf/common.sh@123 -- # set -e 00:15:23.356 15:00:42 -- nvmf/common.sh@124 -- # return 0 00:15:23.356 15:00:42 -- nvmf/common.sh@477 -- # '[' -n 3215084 ']' 00:15:23.356 15:00:42 -- nvmf/common.sh@478 -- # killprocess 3215084 00:15:23.356 15:00:42 -- common/autotest_common.sh@926 -- # '[' -z 3215084 ']' 00:15:23.356 15:00:42 -- common/autotest_common.sh@930 -- # kill -0 3215084 00:15:23.356 15:00:42 -- common/autotest_common.sh@931 -- # uname 00:15:23.614 15:00:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:23.614 15:00:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3215084 00:15:23.614 15:00:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:23.614 15:00:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:23.614 15:00:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3215084' 00:15:23.614 killing process with pid 3215084 00:15:23.614 15:00:42 -- common/autotest_common.sh@945 -- # kill 3215084 00:15:23.614 15:00:42 -- common/autotest_common.sh@950 -- # wait 3215084 00:15:23.874 15:00:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:23.874 15:00:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:23.874 15:00:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:23.874 15:00:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.874 15:00:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:23.874 15:00:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.874 15:00:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.874 15:00:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.779 15:00:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:25.779 00:15:25.779 real 0m20.398s 00:15:25.779 user 0m42.637s 00:15:25.779 sys 0m8.530s 00:15:25.779 15:00:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.779 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:15:25.779 ************************************ 00:15:25.779 END TEST nvmf_connect_stress 00:15:25.779 ************************************ 00:15:25.779 15:00:44 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:25.779 15:00:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:25.779 15:00:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:25.779 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:15:25.779 ************************************ 00:15:25.779 START TEST nvmf_fused_ordering 00:15:25.779 ************************************ 00:15:25.779 15:00:44 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:26.039 * Looking for test storage... 00:15:26.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.039 15:00:44 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.039 15:00:44 -- nvmf/common.sh@7 -- # uname -s 00:15:26.039 15:00:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.039 15:00:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.039 15:00:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.039 15:00:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.039 15:00:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.039 15:00:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.039 15:00:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.039 15:00:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.039 15:00:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.039 15:00:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.039 15:00:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:26.039 15:00:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:26.039 15:00:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.039 15:00:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.039 15:00:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.039 15:00:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.039 15:00:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.039 15:00:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.039 15:00:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.039 15:00:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.039 15:00:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.039 15:00:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.039 15:00:44 -- paths/export.sh@5 -- # export PATH 00:15:26.039 15:00:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.039 15:00:44 -- nvmf/common.sh@46 -- # : 0 00:15:26.039 15:00:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:26.039 15:00:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:26.039 15:00:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:26.039 15:00:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.039 15:00:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.039 15:00:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:26.039 15:00:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:26.039 15:00:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:26.039 15:00:44 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:26.039 15:00:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:26.039 15:00:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.039 15:00:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:26.039 15:00:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:26.039 15:00:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:26.039 15:00:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.039 15:00:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.039 15:00:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.039 15:00:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:26.039 15:00:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:26.039 15:00:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:26.039 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:15:32.608 15:00:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:32.608 15:00:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:32.608 15:00:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:32.608 15:00:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:32.608 15:00:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:32.608 15:00:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:32.608 15:00:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:32.608 15:00:50 -- nvmf/common.sh@294 -- # net_devs=() 00:15:32.608 15:00:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:32.608 15:00:50 -- nvmf/common.sh@295 -- # e810=() 00:15:32.608 15:00:50 -- nvmf/common.sh@295 -- # local -ga e810 00:15:32.608 15:00:50 -- nvmf/common.sh@296 -- # x722=() 
00:15:32.608 15:00:50 -- nvmf/common.sh@296 -- # local -ga x722 00:15:32.608 15:00:50 -- nvmf/common.sh@297 -- # mlx=() 00:15:32.608 15:00:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:32.608 15:00:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.608 15:00:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.608 15:00:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.608 15:00:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.608 15:00:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.608 15:00:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.608 15:00:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.608 15:00:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.609 15:00:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.609 15:00:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.609 15:00:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.609 15:00:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:32.609 15:00:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:32.609 15:00:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:32.609 15:00:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:32.609 15:00:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:32.609 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:32.609 15:00:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:32.609 15:00:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:32.609 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:32.609 15:00:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:32.609 15:00:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:32.609 15:00:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.609 15:00:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:32.609 15:00:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.609 15:00:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:32.609 Found net devices under 0000:af:00.0: cvl_0_0 00:15:32.609 15:00:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:32.609 15:00:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:32.609 15:00:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.609 15:00:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:32.609 15:00:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.609 15:00:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:32.609 Found net devices under 0000:af:00.1: cvl_0_1 00:15:32.609 15:00:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.609 15:00:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:32.609 15:00:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:32.609 15:00:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:32.609 15:00:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.609 15:00:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.609 15:00:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.609 15:00:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:32.609 15:00:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.609 15:00:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.609 15:00:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:32.609 15:00:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.609 15:00:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.609 15:00:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:32.609 15:00:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:32.609 15:00:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.609 15:00:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.609 15:00:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.609 15:00:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.609 15:00:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:32.609 15:00:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.609 15:00:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.609 15:00:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.609 15:00:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:32.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:15:32.609 00:15:32.609 --- 10.0.0.2 ping statistics --- 00:15:32.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.609 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:15:32.609 15:00:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:15:32.609 00:15:32.609 --- 10.0.0.1 ping statistics --- 00:15:32.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.609 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:15:32.609 15:00:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.609 15:00:50 -- nvmf/common.sh@410 -- # return 0 00:15:32.609 15:00:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:32.609 15:00:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.609 15:00:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:32.609 15:00:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.609 15:00:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:32.609 15:00:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:32.609 15:00:50 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:32.609 15:00:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:32.609 15:00:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:32.609 15:00:50 -- common/autotest_common.sh@10 -- # set +x 00:15:32.609 15:00:50 -- nvmf/common.sh@469 -- # nvmfpid=3221271 00:15:32.609 15:00:50 -- nvmf/common.sh@470 -- # waitforlisten 3221271 00:15:32.609 15:00:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.609 15:00:50 -- common/autotest_common.sh@819 -- # '[' -z 3221271 ']' 00:15:32.609 15:00:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.609 15:00:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:32.609 15:00:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.609 15:00:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:32.609 15:00:50 -- common/autotest_common.sh@10 -- # set +x 00:15:32.609 [2024-06-11 15:00:50.887784] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:32.609 [2024-06-11 15:00:50.887841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.609 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.609 [2024-06-11 15:00:50.975396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.609 [2024-06-11 15:00:51.062069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:32.609 [2024-06-11 15:00:51.062209] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.609 [2024-06-11 15:00:51.062222] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.609 [2024-06-11 15:00:51.062231] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
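Condensed for reference, the nvmf_tcp_init sequence traced above reduces to the shell steps below. The interface names (cvl_0_0, cvl_0_1), the 10.0.0.1/10.0.0.2 addresses and the cvl_0_0_ns_spdk namespace are taken from this run; the listing is an illustrative sketch, not the verbatim body of nvmf/common.sh.

# Move the target-side port into its own network namespace and address both ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, test namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept NVMe/TCP traffic on port 4420 and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Load the kernel NVMe/TCP initiator driver and start the SPDK target inside the namespace.
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &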
00:15:32.609 [2024-06-11 15:00:51.062259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.177 15:00:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:33.177 15:00:51 -- common/autotest_common.sh@852 -- # return 0 00:15:33.177 15:00:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:33.177 15:00:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:33.177 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 15:00:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.177 15:00:51 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.177 15:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.177 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 [2024-06-11 15:00:51.855991] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.177 15:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.177 15:00:51 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:33.177 15:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.177 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 15:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.177 15:00:51 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.177 15:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.177 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 [2024-06-11 15:00:51.876150] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.177 15:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.177 15:00:51 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:33.177 15:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.177 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 NULL1 00:15:33.177 15:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.177 15:00:51 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:33.177 15:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.177 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 15:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.177 15:00:51 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:33.177 15:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.177 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 15:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.177 15:00:51 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:33.177 [2024-06-11 15:00:51.928627] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:33.177 [2024-06-11 15:00:51.928658] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221555 ] 00:15:33.177 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.121 Attached to nqn.2016-06.io.spdk:cnode1 00:15:34.121 Namespace ID: 1 size: 1GB 00:15:34.121 fused_ordering(0) 00:15:34.121 fused_ordering(1) 00:15:34.121 fused_ordering(2) 00:15:34.121 fused_ordering(3) 00:15:34.121 fused_ordering(4) 00:15:34.121 fused_ordering(5) 00:15:34.121 fused_ordering(6) 00:15:34.121 fused_ordering(7) 00:15:34.121 fused_ordering(8) 00:15:34.121 fused_ordering(9) 00:15:34.121 fused_ordering(10) 00:15:34.121 fused_ordering(11) 00:15:34.121 fused_ordering(12) 00:15:34.121 fused_ordering(13) 00:15:34.121 fused_ordering(14) 00:15:34.121 fused_ordering(15) 00:15:34.121 fused_ordering(16) 00:15:34.121 fused_ordering(17) 00:15:34.121 fused_ordering(18) 00:15:34.121 fused_ordering(19) 00:15:34.121 fused_ordering(20) 00:15:34.121 fused_ordering(21) 00:15:34.121 fused_ordering(22) 00:15:34.121 fused_ordering(23) 00:15:34.121 fused_ordering(24) 00:15:34.121 fused_ordering(25) 00:15:34.121 fused_ordering(26) 00:15:34.121 fused_ordering(27) 00:15:34.121 fused_ordering(28) 00:15:34.121 fused_ordering(29) 00:15:34.121 fused_ordering(30) 00:15:34.121 fused_ordering(31) 00:15:34.121 fused_ordering(32) 00:15:34.121 fused_ordering(33) 00:15:34.121 fused_ordering(34) 00:15:34.121 fused_ordering(35) 00:15:34.121 fused_ordering(36) 00:15:34.121 fused_ordering(37) 00:15:34.121 fused_ordering(38) 00:15:34.121 fused_ordering(39) 00:15:34.121 fused_ordering(40) 00:15:34.121 fused_ordering(41) 00:15:34.121 fused_ordering(42) 00:15:34.121 fused_ordering(43) 00:15:34.121 fused_ordering(44) 00:15:34.121 fused_ordering(45) 00:15:34.121 fused_ordering(46) 00:15:34.121 fused_ordering(47) 00:15:34.121 fused_ordering(48) 00:15:34.121 fused_ordering(49) 00:15:34.121 fused_ordering(50) 00:15:34.121 fused_ordering(51) 00:15:34.121 fused_ordering(52) 00:15:34.121 fused_ordering(53) 00:15:34.121 fused_ordering(54) 00:15:34.121 fused_ordering(55) 00:15:34.121 fused_ordering(56) 00:15:34.121 fused_ordering(57) 00:15:34.121 fused_ordering(58) 00:15:34.121 fused_ordering(59) 00:15:34.121 fused_ordering(60) 00:15:34.121 fused_ordering(61) 00:15:34.121 fused_ordering(62) 00:15:34.121 fused_ordering(63) 00:15:34.121 fused_ordering(64) 00:15:34.121 fused_ordering(65) 00:15:34.121 fused_ordering(66) 00:15:34.121 fused_ordering(67) 00:15:34.121 fused_ordering(68) 00:15:34.121 fused_ordering(69) 00:15:34.121 fused_ordering(70) 00:15:34.121 fused_ordering(71) 00:15:34.121 fused_ordering(72) 00:15:34.121 fused_ordering(73) 00:15:34.121 fused_ordering(74) 00:15:34.121 fused_ordering(75) 00:15:34.121 fused_ordering(76) 00:15:34.121 fused_ordering(77) 00:15:34.121 fused_ordering(78) 00:15:34.121 fused_ordering(79) 00:15:34.121 fused_ordering(80) 00:15:34.121 fused_ordering(81) 00:15:34.121 fused_ordering(82) 00:15:34.121 fused_ordering(83) 00:15:34.121 fused_ordering(84) 00:15:34.121 fused_ordering(85) 00:15:34.121 fused_ordering(86) 00:15:34.121 fused_ordering(87) 00:15:34.121 fused_ordering(88) 00:15:34.121 fused_ordering(89) 00:15:34.121 fused_ordering(90) 00:15:34.121 fused_ordering(91) 00:15:34.121 fused_ordering(92) 00:15:34.121 fused_ordering(93) 00:15:34.121 fused_ordering(94) 00:15:34.121 fused_ordering(95) 00:15:34.121 fused_ordering(96) 00:15:34.121 
fused_ordering(97) 00:15:34.121 fused_ordering(98) 00:15:34.121 fused_ordering(99) 00:15:34.121 fused_ordering(100) 00:15:34.121 fused_ordering(101) 00:15:34.121 fused_ordering(102) 00:15:34.121 fused_ordering(103) 00:15:34.121 fused_ordering(104) 00:15:34.121 fused_ordering(105) 00:15:34.121 fused_ordering(106) 00:15:34.121 fused_ordering(107) 00:15:34.121 fused_ordering(108) 00:15:34.121 fused_ordering(109) 00:15:34.121 fused_ordering(110) 00:15:34.121 fused_ordering(111) 00:15:34.121 fused_ordering(112) 00:15:34.121 fused_ordering(113) 00:15:34.121 fused_ordering(114) 00:15:34.121 fused_ordering(115) 00:15:34.121 fused_ordering(116) 00:15:34.121 fused_ordering(117) 00:15:34.121 fused_ordering(118) 00:15:34.121 fused_ordering(119) 00:15:34.121 fused_ordering(120) 00:15:34.121 fused_ordering(121) 00:15:34.121 fused_ordering(122) 00:15:34.121 fused_ordering(123) 00:15:34.121 fused_ordering(124) 00:15:34.121 fused_ordering(125) 00:15:34.121 fused_ordering(126) 00:15:34.121 fused_ordering(127) 00:15:34.121 fused_ordering(128) 00:15:34.121 fused_ordering(129) 00:15:34.121 fused_ordering(130) 00:15:34.121 fused_ordering(131) 00:15:34.121 fused_ordering(132) 00:15:34.121 fused_ordering(133) 00:15:34.121 fused_ordering(134) 00:15:34.121 fused_ordering(135) 00:15:34.121 fused_ordering(136) 00:15:34.121 fused_ordering(137) 00:15:34.121 fused_ordering(138) 00:15:34.121 fused_ordering(139) 00:15:34.121 fused_ordering(140) 00:15:34.121 fused_ordering(141) 00:15:34.121 fused_ordering(142) 00:15:34.121 fused_ordering(143) 00:15:34.121 fused_ordering(144) 00:15:34.121 fused_ordering(145) 00:15:34.121 fused_ordering(146) 00:15:34.121 fused_ordering(147) 00:15:34.121 fused_ordering(148) 00:15:34.121 fused_ordering(149) 00:15:34.121 fused_ordering(150) 00:15:34.121 fused_ordering(151) 00:15:34.121 fused_ordering(152) 00:15:34.121 fused_ordering(153) 00:15:34.121 fused_ordering(154) 00:15:34.121 fused_ordering(155) 00:15:34.121 fused_ordering(156) 00:15:34.121 fused_ordering(157) 00:15:34.121 fused_ordering(158) 00:15:34.121 fused_ordering(159) 00:15:34.121 fused_ordering(160) 00:15:34.121 fused_ordering(161) 00:15:34.121 fused_ordering(162) 00:15:34.121 fused_ordering(163) 00:15:34.121 fused_ordering(164) 00:15:34.121 fused_ordering(165) 00:15:34.121 fused_ordering(166) 00:15:34.121 fused_ordering(167) 00:15:34.121 fused_ordering(168) 00:15:34.121 fused_ordering(169) 00:15:34.121 fused_ordering(170) 00:15:34.121 fused_ordering(171) 00:15:34.121 fused_ordering(172) 00:15:34.121 fused_ordering(173) 00:15:34.121 fused_ordering(174) 00:15:34.121 fused_ordering(175) 00:15:34.121 fused_ordering(176) 00:15:34.121 fused_ordering(177) 00:15:34.121 fused_ordering(178) 00:15:34.121 fused_ordering(179) 00:15:34.121 fused_ordering(180) 00:15:34.121 fused_ordering(181) 00:15:34.121 fused_ordering(182) 00:15:34.121 fused_ordering(183) 00:15:34.121 fused_ordering(184) 00:15:34.121 fused_ordering(185) 00:15:34.121 fused_ordering(186) 00:15:34.121 fused_ordering(187) 00:15:34.121 fused_ordering(188) 00:15:34.121 fused_ordering(189) 00:15:34.121 fused_ordering(190) 00:15:34.122 fused_ordering(191) 00:15:34.122 fused_ordering(192) 00:15:34.122 fused_ordering(193) 00:15:34.122 fused_ordering(194) 00:15:34.122 fused_ordering(195) 00:15:34.122 fused_ordering(196) 00:15:34.122 fused_ordering(197) 00:15:34.122 fused_ordering(198) 00:15:34.122 fused_ordering(199) 00:15:34.122 fused_ordering(200) 00:15:34.122 fused_ordering(201) 00:15:34.122 fused_ordering(202) 00:15:34.122 fused_ordering(203) 00:15:34.122 fused_ordering(204) 
00:15:34.122 fused_ordering(205) 00:15:34.380 fused_ordering(206) 00:15:34.380 fused_ordering(207) 00:15:34.380 fused_ordering(208) 00:15:34.380 fused_ordering(209) 00:15:34.380 fused_ordering(210) 00:15:34.380 fused_ordering(211) 00:15:34.380 fused_ordering(212) 00:15:34.380 fused_ordering(213) 00:15:34.380 fused_ordering(214) 00:15:34.380 fused_ordering(215) 00:15:34.380 fused_ordering(216) 00:15:34.380 fused_ordering(217) 00:15:34.380 fused_ordering(218) 00:15:34.380 fused_ordering(219) 00:15:34.380 fused_ordering(220) 00:15:34.380 fused_ordering(221) 00:15:34.380 fused_ordering(222) 00:15:34.380 fused_ordering(223) 00:15:34.380 fused_ordering(224) 00:15:34.380 fused_ordering(225) 00:15:34.380 fused_ordering(226) 00:15:34.380 fused_ordering(227) 00:15:34.380 fused_ordering(228) 00:15:34.380 fused_ordering(229) 00:15:34.380 fused_ordering(230) 00:15:34.380 fused_ordering(231) 00:15:34.380 fused_ordering(232) 00:15:34.380 fused_ordering(233) 00:15:34.380 fused_ordering(234) 00:15:34.380 fused_ordering(235) 00:15:34.380 fused_ordering(236) 00:15:34.380 fused_ordering(237) 00:15:34.380 fused_ordering(238) 00:15:34.380 fused_ordering(239) 00:15:34.380 fused_ordering(240) 00:15:34.380 fused_ordering(241) 00:15:34.380 fused_ordering(242) 00:15:34.380 fused_ordering(243) 00:15:34.380 fused_ordering(244) 00:15:34.380 fused_ordering(245) 00:15:34.380 fused_ordering(246) 00:15:34.380 fused_ordering(247) 00:15:34.380 fused_ordering(248) 00:15:34.380 fused_ordering(249) 00:15:34.380 fused_ordering(250) 00:15:34.380 fused_ordering(251) 00:15:34.380 fused_ordering(252) 00:15:34.380 fused_ordering(253) 00:15:34.380 fused_ordering(254) 00:15:34.380 fused_ordering(255) 00:15:34.380 fused_ordering(256) 00:15:34.380 fused_ordering(257) 00:15:34.380 fused_ordering(258) 00:15:34.380 fused_ordering(259) 00:15:34.380 fused_ordering(260) 00:15:34.380 fused_ordering(261) 00:15:34.380 fused_ordering(262) 00:15:34.380 fused_ordering(263) 00:15:34.380 fused_ordering(264) 00:15:34.380 fused_ordering(265) 00:15:34.380 fused_ordering(266) 00:15:34.380 fused_ordering(267) 00:15:34.380 fused_ordering(268) 00:15:34.380 fused_ordering(269) 00:15:34.380 fused_ordering(270) 00:15:34.380 fused_ordering(271) 00:15:34.380 fused_ordering(272) 00:15:34.380 fused_ordering(273) 00:15:34.380 fused_ordering(274) 00:15:34.380 fused_ordering(275) 00:15:34.380 fused_ordering(276) 00:15:34.380 fused_ordering(277) 00:15:34.380 fused_ordering(278) 00:15:34.380 fused_ordering(279) 00:15:34.380 fused_ordering(280) 00:15:34.380 fused_ordering(281) 00:15:34.380 fused_ordering(282) 00:15:34.380 fused_ordering(283) 00:15:34.380 fused_ordering(284) 00:15:34.380 fused_ordering(285) 00:15:34.380 fused_ordering(286) 00:15:34.380 fused_ordering(287) 00:15:34.380 fused_ordering(288) 00:15:34.380 fused_ordering(289) 00:15:34.380 fused_ordering(290) 00:15:34.380 fused_ordering(291) 00:15:34.380 fused_ordering(292) 00:15:34.380 fused_ordering(293) 00:15:34.380 fused_ordering(294) 00:15:34.380 fused_ordering(295) 00:15:34.380 fused_ordering(296) 00:15:34.380 fused_ordering(297) 00:15:34.380 fused_ordering(298) 00:15:34.380 fused_ordering(299) 00:15:34.380 fused_ordering(300) 00:15:34.380 fused_ordering(301) 00:15:34.380 fused_ordering(302) 00:15:34.380 fused_ordering(303) 00:15:34.380 fused_ordering(304) 00:15:34.380 fused_ordering(305) 00:15:34.380 fused_ordering(306) 00:15:34.380 fused_ordering(307) 00:15:34.380 fused_ordering(308) 00:15:34.380 fused_ordering(309) 00:15:34.380 fused_ordering(310) 00:15:34.380 fused_ordering(311) 00:15:34.381 
fused_ordering(312) 00:15:34.381 fused_ordering(313) 00:15:34.381 fused_ordering(314) 00:15:34.381 fused_ordering(315) 00:15:34.381 fused_ordering(316) 00:15:34.381 fused_ordering(317) 00:15:34.381 fused_ordering(318) 00:15:34.381 fused_ordering(319) 00:15:34.381 fused_ordering(320) 00:15:34.381 fused_ordering(321) 00:15:34.381 fused_ordering(322) 00:15:34.381 fused_ordering(323) 00:15:34.381 fused_ordering(324) 00:15:34.381 fused_ordering(325) 00:15:34.381 fused_ordering(326) 00:15:34.381 fused_ordering(327) 00:15:34.381 fused_ordering(328) 00:15:34.381 fused_ordering(329) 00:15:34.381 fused_ordering(330) 00:15:34.381 fused_ordering(331) 00:15:34.381 fused_ordering(332) 00:15:34.381 fused_ordering(333) 00:15:34.381 fused_ordering(334) 00:15:34.381 fused_ordering(335) 00:15:34.381 fused_ordering(336) 00:15:34.381 fused_ordering(337) 00:15:34.381 fused_ordering(338) 00:15:34.381 fused_ordering(339) 00:15:34.381 fused_ordering(340) 00:15:34.381 fused_ordering(341) 00:15:34.381 fused_ordering(342) 00:15:34.381 fused_ordering(343) 00:15:34.381 fused_ordering(344) 00:15:34.381 fused_ordering(345) 00:15:34.381 fused_ordering(346) 00:15:34.381 fused_ordering(347) 00:15:34.381 fused_ordering(348) 00:15:34.381 fused_ordering(349) 00:15:34.381 fused_ordering(350) 00:15:34.381 fused_ordering(351) 00:15:34.381 fused_ordering(352) 00:15:34.381 fused_ordering(353) 00:15:34.381 fused_ordering(354) 00:15:34.381 fused_ordering(355) 00:15:34.381 fused_ordering(356) 00:15:34.381 fused_ordering(357) 00:15:34.381 fused_ordering(358) 00:15:34.381 fused_ordering(359) 00:15:34.381 fused_ordering(360) 00:15:34.381 fused_ordering(361) 00:15:34.381 fused_ordering(362) 00:15:34.381 fused_ordering(363) 00:15:34.381 fused_ordering(364) 00:15:34.381 fused_ordering(365) 00:15:34.381 fused_ordering(366) 00:15:34.381 fused_ordering(367) 00:15:34.381 fused_ordering(368) 00:15:34.381 fused_ordering(369) 00:15:34.381 fused_ordering(370) 00:15:34.381 fused_ordering(371) 00:15:34.381 fused_ordering(372) 00:15:34.381 fused_ordering(373) 00:15:34.381 fused_ordering(374) 00:15:34.381 fused_ordering(375) 00:15:34.381 fused_ordering(376) 00:15:34.381 fused_ordering(377) 00:15:34.381 fused_ordering(378) 00:15:34.381 fused_ordering(379) 00:15:34.381 fused_ordering(380) 00:15:34.381 fused_ordering(381) 00:15:34.381 fused_ordering(382) 00:15:34.381 fused_ordering(383) 00:15:34.381 fused_ordering(384) 00:15:34.381 fused_ordering(385) 00:15:34.381 fused_ordering(386) 00:15:34.381 fused_ordering(387) 00:15:34.381 fused_ordering(388) 00:15:34.381 fused_ordering(389) 00:15:34.381 fused_ordering(390) 00:15:34.381 fused_ordering(391) 00:15:34.381 fused_ordering(392) 00:15:34.381 fused_ordering(393) 00:15:34.381 fused_ordering(394) 00:15:34.381 fused_ordering(395) 00:15:34.381 fused_ordering(396) 00:15:34.381 fused_ordering(397) 00:15:34.381 fused_ordering(398) 00:15:34.381 fused_ordering(399) 00:15:34.381 fused_ordering(400) 00:15:34.381 fused_ordering(401) 00:15:34.381 fused_ordering(402) 00:15:34.381 fused_ordering(403) 00:15:34.381 fused_ordering(404) 00:15:34.381 fused_ordering(405) 00:15:34.381 fused_ordering(406) 00:15:34.381 fused_ordering(407) 00:15:34.381 fused_ordering(408) 00:15:34.381 fused_ordering(409) 00:15:34.381 fused_ordering(410) 00:15:34.948 fused_ordering(411) 00:15:34.948 fused_ordering(412) 00:15:34.948 fused_ordering(413) 00:15:34.948 fused_ordering(414) 00:15:34.948 fused_ordering(415) 00:15:34.948 fused_ordering(416) 00:15:34.948 fused_ordering(417) 00:15:34.948 fused_ordering(418) 00:15:34.948 fused_ordering(419) 
00:15:34.948 fused_ordering(420) 00:15:34.948 fused_ordering(421) 00:15:34.948 fused_ordering(422) 00:15:34.948 fused_ordering(423) 00:15:34.948 fused_ordering(424) 00:15:34.948 fused_ordering(425) 00:15:34.948 fused_ordering(426) 00:15:34.948 fused_ordering(427) 00:15:34.948 fused_ordering(428) 00:15:34.948 fused_ordering(429) 00:15:34.948 fused_ordering(430) 00:15:34.948 fused_ordering(431) 00:15:34.948 fused_ordering(432) 00:15:34.948 fused_ordering(433) 00:15:34.948 fused_ordering(434) 00:15:34.948 fused_ordering(435) 00:15:34.948 fused_ordering(436) 00:15:34.948 fused_ordering(437) 00:15:34.948 fused_ordering(438) 00:15:34.948 fused_ordering(439) 00:15:34.948 fused_ordering(440) 00:15:34.948 fused_ordering(441) 00:15:34.948 fused_ordering(442) 00:15:34.948 fused_ordering(443) 00:15:34.948 fused_ordering(444) 00:15:34.948 fused_ordering(445) 00:15:34.948 fused_ordering(446) 00:15:34.948 fused_ordering(447) 00:15:34.948 fused_ordering(448) 00:15:34.948 fused_ordering(449) 00:15:34.948 fused_ordering(450) 00:15:34.948 fused_ordering(451) 00:15:34.948 fused_ordering(452) 00:15:34.948 fused_ordering(453) 00:15:34.948 fused_ordering(454) 00:15:34.948 fused_ordering(455) 00:15:34.948 fused_ordering(456) 00:15:34.948 fused_ordering(457) 00:15:34.948 fused_ordering(458) 00:15:34.948 fused_ordering(459) 00:15:34.948 fused_ordering(460) 00:15:34.948 fused_ordering(461) 00:15:34.948 fused_ordering(462) 00:15:34.948 fused_ordering(463) 00:15:34.948 fused_ordering(464) 00:15:34.948 fused_ordering(465) 00:15:34.948 fused_ordering(466) 00:15:34.948 fused_ordering(467) 00:15:34.948 fused_ordering(468) 00:15:34.948 fused_ordering(469) 00:15:34.948 fused_ordering(470) 00:15:34.948 fused_ordering(471) 00:15:34.948 fused_ordering(472) 00:15:34.948 fused_ordering(473) 00:15:34.948 fused_ordering(474) 00:15:34.948 fused_ordering(475) 00:15:34.948 fused_ordering(476) 00:15:34.948 fused_ordering(477) 00:15:34.948 fused_ordering(478) 00:15:34.948 fused_ordering(479) 00:15:34.948 fused_ordering(480) 00:15:34.948 fused_ordering(481) 00:15:34.948 fused_ordering(482) 00:15:34.948 fused_ordering(483) 00:15:34.948 fused_ordering(484) 00:15:34.948 fused_ordering(485) 00:15:34.948 fused_ordering(486) 00:15:34.948 fused_ordering(487) 00:15:34.948 fused_ordering(488) 00:15:34.948 fused_ordering(489) 00:15:34.948 fused_ordering(490) 00:15:34.948 fused_ordering(491) 00:15:34.948 fused_ordering(492) 00:15:34.948 fused_ordering(493) 00:15:34.948 fused_ordering(494) 00:15:34.948 fused_ordering(495) 00:15:34.948 fused_ordering(496) 00:15:34.948 fused_ordering(497) 00:15:34.948 fused_ordering(498) 00:15:34.948 fused_ordering(499) 00:15:34.948 fused_ordering(500) 00:15:34.948 fused_ordering(501) 00:15:34.948 fused_ordering(502) 00:15:34.948 fused_ordering(503) 00:15:34.948 fused_ordering(504) 00:15:34.948 fused_ordering(505) 00:15:34.948 fused_ordering(506) 00:15:34.948 fused_ordering(507) 00:15:34.948 fused_ordering(508) 00:15:34.948 fused_ordering(509) 00:15:34.948 fused_ordering(510) 00:15:34.948 fused_ordering(511) 00:15:34.948 fused_ordering(512) 00:15:34.948 fused_ordering(513) 00:15:34.948 fused_ordering(514) 00:15:34.948 fused_ordering(515) 00:15:34.948 fused_ordering(516) 00:15:34.948 fused_ordering(517) 00:15:34.948 fused_ordering(518) 00:15:34.948 fused_ordering(519) 00:15:34.948 fused_ordering(520) 00:15:34.948 fused_ordering(521) 00:15:34.948 fused_ordering(522) 00:15:34.948 fused_ordering(523) 00:15:34.948 fused_ordering(524) 00:15:34.948 fused_ordering(525) 00:15:34.948 fused_ordering(526) 00:15:34.948 
fused_ordering(527) 00:15:34.948 fused_ordering(528) 00:15:34.948 fused_ordering(529) 00:15:34.948 fused_ordering(530) 00:15:34.948 fused_ordering(531) 00:15:34.948 fused_ordering(532) 00:15:34.948 fused_ordering(533) 00:15:34.948 fused_ordering(534) 00:15:34.948 fused_ordering(535) 00:15:34.948 fused_ordering(536) 00:15:34.948 fused_ordering(537) 00:15:34.948 fused_ordering(538) 00:15:34.948 fused_ordering(539) 00:15:34.948 fused_ordering(540) 00:15:34.948 fused_ordering(541) 00:15:34.948 fused_ordering(542) 00:15:34.948 fused_ordering(543) 00:15:34.948 fused_ordering(544) 00:15:34.948 fused_ordering(545) 00:15:34.948 fused_ordering(546) 00:15:34.948 fused_ordering(547) 00:15:34.948 fused_ordering(548) 00:15:34.948 fused_ordering(549) 00:15:34.948 fused_ordering(550) 00:15:34.948 fused_ordering(551) 00:15:34.948 fused_ordering(552) 00:15:34.948 fused_ordering(553) 00:15:34.948 fused_ordering(554) 00:15:34.948 fused_ordering(555) 00:15:34.948 fused_ordering(556) 00:15:34.948 fused_ordering(557) 00:15:34.948 fused_ordering(558) 00:15:34.948 fused_ordering(559) 00:15:34.948 fused_ordering(560) 00:15:34.948 fused_ordering(561) 00:15:34.948 fused_ordering(562) 00:15:34.948 fused_ordering(563) 00:15:34.948 fused_ordering(564) 00:15:34.948 fused_ordering(565) 00:15:34.948 fused_ordering(566) 00:15:34.948 fused_ordering(567) 00:15:34.948 fused_ordering(568) 00:15:34.948 fused_ordering(569) 00:15:34.948 fused_ordering(570) 00:15:34.948 fused_ordering(571) 00:15:34.948 fused_ordering(572) 00:15:34.948 fused_ordering(573) 00:15:34.948 fused_ordering(574) 00:15:34.948 fused_ordering(575) 00:15:34.948 fused_ordering(576) 00:15:34.948 fused_ordering(577) 00:15:34.948 fused_ordering(578) 00:15:34.948 fused_ordering(579) 00:15:34.948 fused_ordering(580) 00:15:34.948 fused_ordering(581) 00:15:34.948 fused_ordering(582) 00:15:34.948 fused_ordering(583) 00:15:34.948 fused_ordering(584) 00:15:34.948 fused_ordering(585) 00:15:34.948 fused_ordering(586) 00:15:34.948 fused_ordering(587) 00:15:34.948 fused_ordering(588) 00:15:34.948 fused_ordering(589) 00:15:34.948 fused_ordering(590) 00:15:34.948 fused_ordering(591) 00:15:34.948 fused_ordering(592) 00:15:34.948 fused_ordering(593) 00:15:34.948 fused_ordering(594) 00:15:34.948 fused_ordering(595) 00:15:34.948 fused_ordering(596) 00:15:34.948 fused_ordering(597) 00:15:34.948 fused_ordering(598) 00:15:34.948 fused_ordering(599) 00:15:34.948 fused_ordering(600) 00:15:34.948 fused_ordering(601) 00:15:34.948 fused_ordering(602) 00:15:34.948 fused_ordering(603) 00:15:34.948 fused_ordering(604) 00:15:34.948 fused_ordering(605) 00:15:34.948 fused_ordering(606) 00:15:34.948 fused_ordering(607) 00:15:34.948 fused_ordering(608) 00:15:34.948 fused_ordering(609) 00:15:34.948 fused_ordering(610) 00:15:34.949 fused_ordering(611) 00:15:34.949 fused_ordering(612) 00:15:34.949 fused_ordering(613) 00:15:34.949 fused_ordering(614) 00:15:34.949 fused_ordering(615) 00:15:35.884 fused_ordering(616) 00:15:35.884 fused_ordering(617) 00:15:35.884 fused_ordering(618) 00:15:35.884 fused_ordering(619) 00:15:35.884 fused_ordering(620) 00:15:35.884 fused_ordering(621) 00:15:35.884 fused_ordering(622) 00:15:35.884 fused_ordering(623) 00:15:35.884 fused_ordering(624) 00:15:35.884 fused_ordering(625) 00:15:35.884 fused_ordering(626) 00:15:35.884 fused_ordering(627) 00:15:35.884 fused_ordering(628) 00:15:35.884 fused_ordering(629) 00:15:35.884 fused_ordering(630) 00:15:35.884 fused_ordering(631) 00:15:35.884 fused_ordering(632) 00:15:35.884 fused_ordering(633) 00:15:35.884 fused_ordering(634) 
00:15:35.884 fused_ordering(635) 00:15:35.884 fused_ordering(636) 00:15:35.884 fused_ordering(637) 00:15:35.884 fused_ordering(638) 00:15:35.884 fused_ordering(639) 00:15:35.884 fused_ordering(640) 00:15:35.884 fused_ordering(641) 00:15:35.884 fused_ordering(642) 00:15:35.884 fused_ordering(643) 00:15:35.884 fused_ordering(644) 00:15:35.884 fused_ordering(645) 00:15:35.884 fused_ordering(646) 00:15:35.884 fused_ordering(647) 00:15:35.884 fused_ordering(648) 00:15:35.884 fused_ordering(649) 00:15:35.884 fused_ordering(650) 00:15:35.884 fused_ordering(651) 00:15:35.884 fused_ordering(652) 00:15:35.884 fused_ordering(653) 00:15:35.884 fused_ordering(654) 00:15:35.884 fused_ordering(655) 00:15:35.884 fused_ordering(656) 00:15:35.884 fused_ordering(657) 00:15:35.884 fused_ordering(658) 00:15:35.884 fused_ordering(659) 00:15:35.884 fused_ordering(660) 00:15:35.884 fused_ordering(661) 00:15:35.884 fused_ordering(662) 00:15:35.884 fused_ordering(663) 00:15:35.884 fused_ordering(664) 00:15:35.884 fused_ordering(665) 00:15:35.884 fused_ordering(666) 00:15:35.884 fused_ordering(667) 00:15:35.884 fused_ordering(668) 00:15:35.884 fused_ordering(669) 00:15:35.884 fused_ordering(670) 00:15:35.884 fused_ordering(671) 00:15:35.884 fused_ordering(672) 00:15:35.884 fused_ordering(673) 00:15:35.884 fused_ordering(674) 00:15:35.884 fused_ordering(675) 00:15:35.884 fused_ordering(676) 00:15:35.884 fused_ordering(677) 00:15:35.884 fused_ordering(678) 00:15:35.884 fused_ordering(679) 00:15:35.884 fused_ordering(680) 00:15:35.884 fused_ordering(681) 00:15:35.884 fused_ordering(682) 00:15:35.884 fused_ordering(683) 00:15:35.884 fused_ordering(684) 00:15:35.884 fused_ordering(685) 00:15:35.884 fused_ordering(686) 00:15:35.884 fused_ordering(687) 00:15:35.884 fused_ordering(688) 00:15:35.884 fused_ordering(689) 00:15:35.884 fused_ordering(690) 00:15:35.884 fused_ordering(691) 00:15:35.884 fused_ordering(692) 00:15:35.884 fused_ordering(693) 00:15:35.884 fused_ordering(694) 00:15:35.884 fused_ordering(695) 00:15:35.884 fused_ordering(696) 00:15:35.884 fused_ordering(697) 00:15:35.884 fused_ordering(698) 00:15:35.884 fused_ordering(699) 00:15:35.884 fused_ordering(700) 00:15:35.884 fused_ordering(701) 00:15:35.884 fused_ordering(702) 00:15:35.884 fused_ordering(703) 00:15:35.884 fused_ordering(704) 00:15:35.884 fused_ordering(705) 00:15:35.884 fused_ordering(706) 00:15:35.884 fused_ordering(707) 00:15:35.884 fused_ordering(708) 00:15:35.884 fused_ordering(709) 00:15:35.884 fused_ordering(710) 00:15:35.884 fused_ordering(711) 00:15:35.884 fused_ordering(712) 00:15:35.884 fused_ordering(713) 00:15:35.884 fused_ordering(714) 00:15:35.884 fused_ordering(715) 00:15:35.884 fused_ordering(716) 00:15:35.884 fused_ordering(717) 00:15:35.884 fused_ordering(718) 00:15:35.884 fused_ordering(719) 00:15:35.884 fused_ordering(720) 00:15:35.884 fused_ordering(721) 00:15:35.884 fused_ordering(722) 00:15:35.884 fused_ordering(723) 00:15:35.884 fused_ordering(724) 00:15:35.884 fused_ordering(725) 00:15:35.884 fused_ordering(726) 00:15:35.884 fused_ordering(727) 00:15:35.884 fused_ordering(728) 00:15:35.884 fused_ordering(729) 00:15:35.884 fused_ordering(730) 00:15:35.884 fused_ordering(731) 00:15:35.884 fused_ordering(732) 00:15:35.884 fused_ordering(733) 00:15:35.884 fused_ordering(734) 00:15:35.884 fused_ordering(735) 00:15:35.884 fused_ordering(736) 00:15:35.884 fused_ordering(737) 00:15:35.884 fused_ordering(738) 00:15:35.884 fused_ordering(739) 00:15:35.884 fused_ordering(740) 00:15:35.884 fused_ordering(741) 00:15:35.884 
fused_ordering(742) 00:15:35.884 fused_ordering(743) 00:15:35.884 fused_ordering(744) 00:15:35.884 fused_ordering(745) 00:15:35.884 fused_ordering(746) 00:15:35.884 fused_ordering(747) 00:15:35.884 fused_ordering(748) 00:15:35.884 fused_ordering(749) 00:15:35.884 fused_ordering(750) 00:15:35.884 fused_ordering(751) 00:15:35.884 fused_ordering(752) 00:15:35.884 fused_ordering(753) 00:15:35.884 fused_ordering(754) 00:15:35.884 fused_ordering(755) 00:15:35.884 fused_ordering(756) 00:15:35.884 fused_ordering(757) 00:15:35.884 fused_ordering(758) 00:15:35.884 fused_ordering(759) 00:15:35.884 fused_ordering(760) 00:15:35.884 fused_ordering(761) 00:15:35.884 fused_ordering(762) 00:15:35.884 fused_ordering(763) 00:15:35.884 fused_ordering(764) 00:15:35.884 fused_ordering(765) 00:15:35.884 fused_ordering(766) 00:15:35.884 fused_ordering(767) 00:15:35.884 fused_ordering(768) 00:15:35.884 fused_ordering(769) 00:15:35.884 fused_ordering(770) 00:15:35.884 fused_ordering(771) 00:15:35.884 fused_ordering(772) 00:15:35.884 fused_ordering(773) 00:15:35.884 fused_ordering(774) 00:15:35.884 fused_ordering(775) 00:15:35.884 fused_ordering(776) 00:15:35.884 fused_ordering(777) 00:15:35.884 fused_ordering(778) 00:15:35.884 fused_ordering(779) 00:15:35.884 fused_ordering(780) 00:15:35.884 fused_ordering(781) 00:15:35.884 fused_ordering(782) 00:15:35.884 fused_ordering(783) 00:15:35.884 fused_ordering(784) 00:15:35.884 fused_ordering(785) 00:15:35.884 fused_ordering(786) 00:15:35.884 fused_ordering(787) 00:15:35.884 fused_ordering(788) 00:15:35.884 fused_ordering(789) 00:15:35.884 fused_ordering(790) 00:15:35.884 fused_ordering(791) 00:15:35.884 fused_ordering(792) 00:15:35.884 fused_ordering(793) 00:15:35.884 fused_ordering(794) 00:15:35.884 fused_ordering(795) 00:15:35.884 fused_ordering(796) 00:15:35.884 fused_ordering(797) 00:15:35.884 fused_ordering(798) 00:15:35.884 fused_ordering(799) 00:15:35.884 fused_ordering(800) 00:15:35.884 fused_ordering(801) 00:15:35.884 fused_ordering(802) 00:15:35.884 fused_ordering(803) 00:15:35.884 fused_ordering(804) 00:15:35.884 fused_ordering(805) 00:15:35.884 fused_ordering(806) 00:15:35.884 fused_ordering(807) 00:15:35.884 fused_ordering(808) 00:15:35.884 fused_ordering(809) 00:15:35.884 fused_ordering(810) 00:15:35.884 fused_ordering(811) 00:15:35.884 fused_ordering(812) 00:15:35.884 fused_ordering(813) 00:15:35.885 fused_ordering(814) 00:15:35.885 fused_ordering(815) 00:15:35.885 fused_ordering(816) 00:15:35.885 fused_ordering(817) 00:15:35.885 fused_ordering(818) 00:15:35.885 fused_ordering(819) 00:15:35.885 fused_ordering(820) 00:15:36.462 fused_ordering(821) 00:15:36.462 fused_ordering(822) 00:15:36.462 fused_ordering(823) 00:15:36.462 fused_ordering(824) 00:15:36.462 fused_ordering(825) 00:15:36.462 fused_ordering(826) 00:15:36.462 fused_ordering(827) 00:15:36.462 fused_ordering(828) 00:15:36.462 fused_ordering(829) 00:15:36.462 fused_ordering(830) 00:15:36.462 fused_ordering(831) 00:15:36.462 fused_ordering(832) 00:15:36.463 fused_ordering(833) 00:15:36.463 fused_ordering(834) 00:15:36.463 fused_ordering(835) 00:15:36.463 fused_ordering(836) 00:15:36.463 fused_ordering(837) 00:15:36.463 fused_ordering(838) 00:15:36.463 fused_ordering(839) 00:15:36.463 fused_ordering(840) 00:15:36.463 fused_ordering(841) 00:15:36.463 fused_ordering(842) 00:15:36.463 fused_ordering(843) 00:15:36.463 fused_ordering(844) 00:15:36.463 fused_ordering(845) 00:15:36.463 fused_ordering(846) 00:15:36.463 fused_ordering(847) 00:15:36.463 fused_ordering(848) 00:15:36.463 fused_ordering(849) 
00:15:36.463 fused_ordering(850) 00:15:36.463 fused_ordering(851) 00:15:36.463 fused_ordering(852) 00:15:36.463 fused_ordering(853) 00:15:36.463 fused_ordering(854) 00:15:36.463 fused_ordering(855) 00:15:36.463 fused_ordering(856) 00:15:36.463 fused_ordering(857) 00:15:36.463 fused_ordering(858) 00:15:36.463 fused_ordering(859) 00:15:36.463 fused_ordering(860) 00:15:36.463 fused_ordering(861) 00:15:36.463 fused_ordering(862) 00:15:36.463 fused_ordering(863) 00:15:36.463 fused_ordering(864) 00:15:36.463 fused_ordering(865) 00:15:36.463 fused_ordering(866) 00:15:36.463 fused_ordering(867) 00:15:36.463 fused_ordering(868) 00:15:36.463 fused_ordering(869) 00:15:36.463 fused_ordering(870) 00:15:36.463 fused_ordering(871) 00:15:36.463 fused_ordering(872) 00:15:36.463 fused_ordering(873) 00:15:36.463 fused_ordering(874) 00:15:36.463 fused_ordering(875) 00:15:36.463 fused_ordering(876) 00:15:36.463 fused_ordering(877) 00:15:36.463 fused_ordering(878) 00:15:36.463 fused_ordering(879) 00:15:36.463 fused_ordering(880) 00:15:36.463 fused_ordering(881) 00:15:36.463 fused_ordering(882) 00:15:36.463 fused_ordering(883) 00:15:36.463 fused_ordering(884) 00:15:36.463 fused_ordering(885) 00:15:36.463 fused_ordering(886) 00:15:36.463 fused_ordering(887) 00:15:36.464 fused_ordering(888) 00:15:36.464 fused_ordering(889) 00:15:36.464 fused_ordering(890) 00:15:36.464 fused_ordering(891) 00:15:36.464 fused_ordering(892) 00:15:36.464 fused_ordering(893) 00:15:36.464 fused_ordering(894) 00:15:36.464 fused_ordering(895) 00:15:36.464 fused_ordering(896) 00:15:36.464 fused_ordering(897) 00:15:36.464 fused_ordering(898) 00:15:36.464 fused_ordering(899) 00:15:36.464 fused_ordering(900) 00:15:36.464 fused_ordering(901) 00:15:36.464 fused_ordering(902) 00:15:36.464 fused_ordering(903) 00:15:36.464 fused_ordering(904) 00:15:36.464 fused_ordering(905) 00:15:36.464 fused_ordering(906) 00:15:36.464 fused_ordering(907) 00:15:36.464 fused_ordering(908) 00:15:36.464 fused_ordering(909) 00:15:36.464 fused_ordering(910) 00:15:36.464 fused_ordering(911) 00:15:36.464 fused_ordering(912) 00:15:36.464 fused_ordering(913) 00:15:36.464 fused_ordering(914) 00:15:36.464 fused_ordering(915) 00:15:36.464 fused_ordering(916) 00:15:36.464 fused_ordering(917) 00:15:36.464 fused_ordering(918) 00:15:36.464 fused_ordering(919) 00:15:36.464 fused_ordering(920) 00:15:36.464 fused_ordering(921) 00:15:36.464 fused_ordering(922) 00:15:36.464 fused_ordering(923) 00:15:36.464 fused_ordering(924) 00:15:36.464 fused_ordering(925) 00:15:36.464 fused_ordering(926) 00:15:36.464 fused_ordering(927) 00:15:36.464 fused_ordering(928) 00:15:36.464 fused_ordering(929) 00:15:36.464 fused_ordering(930) 00:15:36.464 fused_ordering(931) 00:15:36.464 fused_ordering(932) 00:15:36.464 fused_ordering(933) 00:15:36.464 fused_ordering(934) 00:15:36.464 fused_ordering(935) 00:15:36.464 fused_ordering(936) 00:15:36.464 fused_ordering(937) 00:15:36.464 fused_ordering(938) 00:15:36.464 fused_ordering(939) 00:15:36.464 fused_ordering(940) 00:15:36.464 fused_ordering(941) 00:15:36.464 fused_ordering(942) 00:15:36.464 fused_ordering(943) 00:15:36.464 fused_ordering(944) 00:15:36.464 fused_ordering(945) 00:15:36.464 fused_ordering(946) 00:15:36.464 fused_ordering(947) 00:15:36.464 fused_ordering(948) 00:15:36.464 fused_ordering(949) 00:15:36.464 fused_ordering(950) 00:15:36.464 fused_ordering(951) 00:15:36.464 fused_ordering(952) 00:15:36.464 fused_ordering(953) 00:15:36.464 fused_ordering(954) 00:15:36.464 fused_ordering(955) 00:15:36.464 fused_ordering(956) 00:15:36.464 
fused_ordering(957) 00:15:36.464 fused_ordering(958) 00:15:36.464 fused_ordering(959) 00:15:36.464 fused_ordering(960) 00:15:36.464 fused_ordering(961) 00:15:36.464 fused_ordering(962) 00:15:36.464 fused_ordering(963) 00:15:36.464 fused_ordering(964) 00:15:36.464 fused_ordering(965) 00:15:36.464 fused_ordering(966) 00:15:36.464 fused_ordering(967) 00:15:36.464 fused_ordering(968) 00:15:36.464 fused_ordering(969) 00:15:36.464 fused_ordering(970) 00:15:36.464 fused_ordering(971) 00:15:36.464 fused_ordering(972) 00:15:36.464 fused_ordering(973) 00:15:36.464 fused_ordering(974) 00:15:36.464 fused_ordering(975) 00:15:36.464 fused_ordering(976) 00:15:36.464 fused_ordering(977) 00:15:36.464 fused_ordering(978) 00:15:36.464 fused_ordering(979) 00:15:36.464 fused_ordering(980) 00:15:36.464 fused_ordering(981) 00:15:36.464 fused_ordering(982) 00:15:36.464 fused_ordering(983) 00:15:36.464 fused_ordering(984) 00:15:36.464 fused_ordering(985) 00:15:36.464 fused_ordering(986) 00:15:36.464 fused_ordering(987) 00:15:36.464 fused_ordering(988) 00:15:36.464 fused_ordering(989) 00:15:36.464 fused_ordering(990) 00:15:36.464 fused_ordering(991) 00:15:36.464 fused_ordering(992) 00:15:36.464 fused_ordering(993) 00:15:36.464 fused_ordering(994) 00:15:36.464 fused_ordering(995) 00:15:36.464 fused_ordering(996) 00:15:36.464 fused_ordering(997) 00:15:36.464 fused_ordering(998) 00:15:36.464 fused_ordering(999) 00:15:36.464 fused_ordering(1000) 00:15:36.464 fused_ordering(1001) 00:15:36.464 fused_ordering(1002) 00:15:36.464 fused_ordering(1003) 00:15:36.464 fused_ordering(1004) 00:15:36.464 fused_ordering(1005) 00:15:36.464 fused_ordering(1006) 00:15:36.464 fused_ordering(1007) 00:15:36.464 fused_ordering(1008) 00:15:36.464 fused_ordering(1009) 00:15:36.464 fused_ordering(1010) 00:15:36.464 fused_ordering(1011) 00:15:36.464 fused_ordering(1012) 00:15:36.464 fused_ordering(1013) 00:15:36.464 fused_ordering(1014) 00:15:36.464 fused_ordering(1015) 00:15:36.464 fused_ordering(1016) 00:15:36.464 fused_ordering(1017) 00:15:36.464 fused_ordering(1018) 00:15:36.464 fused_ordering(1019) 00:15:36.464 fused_ordering(1020) 00:15:36.464 fused_ordering(1021) 00:15:36.464 fused_ordering(1022) 00:15:36.464 fused_ordering(1023) 00:15:36.464 15:00:55 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:36.464 15:00:55 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:36.464 15:00:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:36.464 15:00:55 -- nvmf/common.sh@116 -- # sync 00:15:36.464 15:00:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:36.464 15:00:55 -- nvmf/common.sh@119 -- # set +e 00:15:36.464 15:00:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:36.464 15:00:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:36.464 rmmod nvme_tcp 00:15:36.464 rmmod nvme_fabrics 00:15:36.464 rmmod nvme_keyring 00:15:36.464 15:00:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:36.464 15:00:55 -- nvmf/common.sh@123 -- # set -e 00:15:36.464 15:00:55 -- nvmf/common.sh@124 -- # return 0 00:15:36.464 15:00:55 -- nvmf/common.sh@477 -- # '[' -n 3221271 ']' 00:15:36.464 15:00:55 -- nvmf/common.sh@478 -- # killprocess 3221271 00:15:36.464 15:00:55 -- common/autotest_common.sh@926 -- # '[' -z 3221271 ']' 00:15:36.464 15:00:55 -- common/autotest_common.sh@930 -- # kill -0 3221271 00:15:36.464 15:00:55 -- common/autotest_common.sh@931 -- # uname 00:15:36.723 15:00:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:36.723 15:00:55 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 3221271 00:15:36.723 15:00:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:36.723 15:00:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:36.723 15:00:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3221271' 00:15:36.723 killing process with pid 3221271 00:15:36.723 15:00:55 -- common/autotest_common.sh@945 -- # kill 3221271 00:15:36.723 15:00:55 -- common/autotest_common.sh@950 -- # wait 3221271 00:15:36.981 15:00:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:36.981 15:00:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:36.981 15:00:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:36.981 15:00:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.981 15:00:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:36.981 15:00:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.981 15:00:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.981 15:00:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.901 15:00:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:38.901 00:15:38.901 real 0m13.056s 00:15:38.901 user 0m7.933s 00:15:38.901 sys 0m6.921s 00:15:38.901 15:00:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.901 15:00:57 -- common/autotest_common.sh@10 -- # set +x 00:15:38.901 ************************************ 00:15:38.901 END TEST nvmf_fused_ordering 00:15:38.901 ************************************ 00:15:38.901 15:00:57 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:38.901 15:00:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:38.901 15:00:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:38.901 15:00:57 -- common/autotest_common.sh@10 -- # set +x 00:15:38.901 ************************************ 00:15:38.901 START TEST nvmf_delete_subsystem 00:15:38.901 ************************************ 00:15:38.901 15:00:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:39.159 * Looking for test storage... 
00:15:39.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.159 15:00:57 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.159 15:00:57 -- nvmf/common.sh@7 -- # uname -s 00:15:39.159 15:00:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.159 15:00:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.159 15:00:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.159 15:00:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.159 15:00:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.159 15:00:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.159 15:00:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.159 15:00:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.159 15:00:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.159 15:00:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.159 15:00:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:39.159 15:00:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:39.159 15:00:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.159 15:00:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.159 15:00:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.159 15:00:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.159 15:00:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.159 15:00:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.159 15:00:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.159 15:00:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.159 15:00:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.159 15:00:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.159 15:00:57 -- paths/export.sh@5 -- # export PATH 00:15:39.159 15:00:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.159 15:00:57 -- nvmf/common.sh@46 -- # : 0 00:15:39.159 15:00:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:39.159 15:00:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:39.159 15:00:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:39.159 15:00:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.159 15:00:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.159 15:00:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:39.159 15:00:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:39.159 15:00:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:39.159 15:00:57 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:39.159 15:00:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:39.159 15:00:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.159 15:00:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:39.159 15:00:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:39.159 15:00:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:39.159 15:00:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.159 15:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.159 15:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.159 15:00:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:39.159 15:00:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:39.159 15:00:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:39.159 15:00:57 -- common/autotest_common.sh@10 -- # set +x 00:15:45.727 15:01:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:45.727 15:01:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:45.727 15:01:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:45.727 15:01:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:45.727 15:01:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:45.727 15:01:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:45.727 15:01:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:45.727 15:01:03 -- nvmf/common.sh@294 -- # net_devs=() 00:15:45.727 15:01:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:45.727 15:01:03 -- nvmf/common.sh@295 -- # e810=() 00:15:45.727 15:01:03 -- nvmf/common.sh@295 -- # local -ga e810 00:15:45.727 15:01:03 -- nvmf/common.sh@296 -- # x722=() 
00:15:45.727 15:01:03 -- nvmf/common.sh@296 -- # local -ga x722 00:15:45.727 15:01:03 -- nvmf/common.sh@297 -- # mlx=() 00:15:45.727 15:01:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:45.727 15:01:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.727 15:01:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:45.727 15:01:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:45.727 15:01:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:45.728 15:01:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:45.728 15:01:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:45.728 15:01:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:45.728 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:45.728 15:01:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:45.728 15:01:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:45.728 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:45.728 15:01:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:45.728 15:01:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:45.728 15:01:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.728 15:01:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:45.728 15:01:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.728 15:01:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:45.728 Found net devices under 0000:af:00.0: cvl_0_0 00:15:45.728 15:01:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:45.728 15:01:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:45.728 15:01:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.728 15:01:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:45.728 15:01:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.728 15:01:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:45.728 Found net devices under 0000:af:00.1: cvl_0_1 00:15:45.728 15:01:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.728 15:01:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:45.728 15:01:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:45.728 15:01:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:45.728 15:01:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:45.728 15:01:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.728 15:01:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.728 15:01:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.728 15:01:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:45.728 15:01:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.728 15:01:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.728 15:01:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:45.728 15:01:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.728 15:01:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.728 15:01:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:45.728 15:01:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:45.728 15:01:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.728 15:01:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.728 15:01:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.728 15:01:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.728 15:01:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:45.728 15:01:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.728 15:01:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.728 15:01:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.728 15:01:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:45.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:15:45.728 00:15:45.728 --- 10.0.0.2 ping statistics --- 00:15:45.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.728 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:15:45.728 15:01:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:45.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:15:45.728 00:15:45.728 --- 10.0.0.1 ping statistics --- 00:15:45.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.728 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:15:45.728 15:01:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.728 15:01:04 -- nvmf/common.sh@410 -- # return 0 00:15:45.728 15:01:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:45.728 15:01:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.728 15:01:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:45.728 15:01:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:45.728 15:01:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.728 15:01:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:45.728 15:01:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:45.728 15:01:04 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:45.728 15:01:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:45.728 15:01:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:45.728 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:15:45.728 15:01:04 -- nvmf/common.sh@469 -- # nvmfpid=3226119 00:15:45.728 15:01:04 -- nvmf/common.sh@470 -- # waitforlisten 3226119 00:15:45.728 15:01:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:45.728 15:01:04 -- common/autotest_common.sh@819 -- # '[' -z 3226119 ']' 00:15:45.728 15:01:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.728 15:01:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:45.728 15:01:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.728 15:01:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:45.728 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:15:45.728 [2024-06-11 15:01:04.346977] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:45.728 [2024-06-11 15:01:04.347047] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.728 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.728 [2024-06-11 15:01:04.443039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:45.728 [2024-06-11 15:01:04.526708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:45.728 [2024-06-11 15:01:04.526859] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.728 [2024-06-11 15:01:04.526870] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.728 [2024-06-11 15:01:04.526879] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
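nvmf_tcp_init above builds both ends of the TCP test on a single host: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened in iptables, reachability is checked with ping in both directions, and nvmf_tgt is then started inside the namespace. Condensed from the commands in the trace (interface names and the core mask are specific to this test bed):

sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target side lives in the namespace
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator -> target check
sudo modprobe nvme-tcp                                         # host-side transport for the later connect
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3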
00:15:45.728 [2024-06-11 15:01:04.526984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.728 [2024-06-11 15:01:04.526989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.663 15:01:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:46.663 15:01:05 -- common/autotest_common.sh@852 -- # return 0 00:15:46.663 15:01:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:46.663 15:01:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:46.663 15:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.663 15:01:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.663 15:01:05 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.663 15:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.663 15:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.663 [2024-06-11 15:01:05.306430] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.663 15:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.663 15:01:05 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:46.663 15:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.663 15:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.663 15:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.663 15:01:05 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.663 15:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.663 15:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.663 [2024-06-11 15:01:05.326622] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.663 15:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.663 15:01:05 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:46.663 15:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.663 15:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.663 NULL1 00:15:46.663 15:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.663 15:01:05 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:46.663 15:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.663 15:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.663 Delay0 00:15:46.663 15:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.663 15:01:05 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.663 15:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.663 15:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.663 15:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.663 15:01:05 -- target/delete_subsystem.sh@28 -- # perf_pid=3226408 00:15:46.663 15:01:05 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:46.663 15:01:05 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:46.663 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.663 [2024-06-11 15:01:05.407441] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:48.564 15:01:07 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:48.564 15:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:48.564 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:15:48.823 Read completed with error (sct=0, sc=8) 00:15:48.823 starting I/O failed: -6 00:15:48.823 Read completed with error (sct=0, sc=8) 00:15:48.823 Read completed with error (sct=0, sc=8) 00:15:48.823 Write completed with error (sct=0, sc=8) 00:15:48.823 Read completed with error (sct=0, sc=8) 00:15:48.823 starting I/O failed: -6 00:15:48.823 Write completed with error (sct=0, sc=8) 00:15:48.823 Read completed with error (sct=0, sc=8) 00:15:48.823 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 [2024-06-11 15:01:07.484280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dd040 is same with the state(5) to be set 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, 
sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 
00:15:48.824 starting I/O failed: -6 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 starting I/O failed: -6 00:15:48.824 [2024-06-11 15:01:07.491494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff91800c350 is same with the state(5) to be set 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.824 Write completed with error (sct=0, sc=8) 00:15:48.824 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error 
(sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Write completed with error (sct=0, sc=8) 00:15:48.825 Read completed with error (sct=0, sc=8) 00:15:49.760 [2024-06-11 15:01:08.461336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16de5e0 is same with the state(5) to be set 00:15:49.760 Write completed with error (sct=0, sc=8) 00:15:49.760 Read completed with error (sct=0, sc=8) 00:15:49.760 Write completed with error (sct=0, sc=8) 00:15:49.760 Read completed with error (sct=0, sc=8) 00:15:49.760 Write completed with error (sct=0, sc=8) 00:15:49.760 Read completed with error (sct=0, sc=8) 00:15:49.760 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 [2024-06-11 15:01:08.487681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d3910 is same with the state(5) to be set 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 
00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 [2024-06-11 15:01:08.488119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd8b0 is same with the state(5) to be set 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 [2024-06-11 15:01:08.492730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff91800bf20 is same with the state(5) to be set 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Write completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 Read completed with error (sct=0, sc=8) 00:15:49.761 [2024-06-11 15:01:08.492881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff91800c600 is same with the state(5) to be set 00:15:49.761 [2024-06-11 15:01:08.493418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16de5e0 (9): Bad file descriptor 00:15:49.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:49.761 15:01:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.761 15:01:08 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:49.761 15:01:08 -- target/delete_subsystem.sh@35 -- # kill -0 3226408 00:15:49.761 15:01:08 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:49.761 Initializing NVMe Controllers 00:15:49.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:49.761 Controller IO queue size 128, less than required. 
00:15:49.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:49.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:49.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:49.761 Initialization complete. Launching workers. 00:15:49.761 ======================================================== 00:15:49.761 Latency(us) 00:15:49.761 Device Information : IOPS MiB/s Average min max 00:15:49.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.68 0.08 899955.76 251.50 1008474.25 00:15:49.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.72 0.08 924324.88 318.67 1013713.34 00:15:49.761 ======================================================== 00:15:49.761 Total : 325.40 0.16 911767.70 251.50 1013713.34 00:15:49.761 00:15:50.376 15:01:08 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:50.376 15:01:08 -- target/delete_subsystem.sh@35 -- # kill -0 3226408 00:15:50.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3226408) - No such process 00:15:50.376 15:01:08 -- target/delete_subsystem.sh@45 -- # NOT wait 3226408 00:15:50.376 15:01:08 -- common/autotest_common.sh@640 -- # local es=0 00:15:50.376 15:01:08 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3226408 00:15:50.376 15:01:08 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:50.376 15:01:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:50.376 15:01:09 -- common/autotest_common.sh@632 -- # type -t wait 00:15:50.376 15:01:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:50.376 15:01:09 -- common/autotest_common.sh@643 -- # wait 3226408 00:15:50.376 15:01:09 -- common/autotest_common.sh@643 -- # es=1 00:15:50.376 15:01:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:50.376 15:01:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:50.376 15:01:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:50.376 15:01:09 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:50.376 15:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.376 15:01:09 -- common/autotest_common.sh@10 -- # set +x 00:15:50.376 15:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.376 15:01:09 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.376 15:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.376 15:01:09 -- common/autotest_common.sh@10 -- # set +x 00:15:50.376 [2024-06-11 15:01:09.019682] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.376 15:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.376 15:01:09 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.376 15:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.376 15:01:09 -- common/autotest_common.sh@10 -- # set +x 00:15:50.376 15:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.376 15:01:09 -- target/delete_subsystem.sh@54 -- # perf_pid=3226968 00:15:50.376 15:01:09 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:50.376 15:01:09 -- target/delete_subsystem.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:50.376 15:01:09 -- target/delete_subsystem.sh@57 -- # kill -0 3226968 00:15:50.376 15:01:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:50.376 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.376 [2024-06-11 15:01:09.081198] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:50.956 15:01:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:50.956 15:01:09 -- target/delete_subsystem.sh@57 -- # kill -0 3226968 00:15:50.956 15:01:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:51.215 15:01:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:51.215 15:01:10 -- target/delete_subsystem.sh@57 -- # kill -0 3226968 00:15:51.215 15:01:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:51.783 15:01:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:51.783 15:01:10 -- target/delete_subsystem.sh@57 -- # kill -0 3226968 00:15:51.783 15:01:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:52.352 15:01:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:52.352 15:01:11 -- target/delete_subsystem.sh@57 -- # kill -0 3226968 00:15:52.352 15:01:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:52.920 15:01:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:52.920 15:01:11 -- target/delete_subsystem.sh@57 -- # kill -0 3226968 00:15:52.920 15:01:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:53.486 15:01:12 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:53.486 15:01:12 -- target/delete_subsystem.sh@57 -- # kill -0 3226968 00:15:53.486 15:01:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:53.486 Initializing NVMe Controllers 00:15:53.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:53.486 Controller IO queue size 128, less than required. 00:15:53.486 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:53.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:53.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:53.486 Initialization complete. Launching workers. 
00:15:53.486 ======================================================== 00:15:53.486 Latency(us) 00:15:53.486 Device Information : IOPS MiB/s Average min max 00:15:53.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002621.03 1000251.90 1007892.20 00:15:53.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004870.18 1000337.16 1012011.53 00:15:53.486 ======================================================== 00:15:53.486 Total : 256.00 0.12 1003745.61 1000251.90 1012011.53 00:15:53.486 00:15:53.744 15:01:12 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:53.744 15:01:12 -- target/delete_subsystem.sh@57 -- # kill -0 3226968 00:15:53.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3226968) - No such process 00:15:53.745 15:01:12 -- target/delete_subsystem.sh@67 -- # wait 3226968 00:15:53.745 15:01:12 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:53.745 15:01:12 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:53.745 15:01:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:53.745 15:01:12 -- nvmf/common.sh@116 -- # sync 00:15:53.745 15:01:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:53.745 15:01:12 -- nvmf/common.sh@119 -- # set +e 00:15:53.745 15:01:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:53.745 15:01:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:53.745 rmmod nvme_tcp 00:15:54.003 rmmod nvme_fabrics 00:15:54.003 rmmod nvme_keyring 00:15:54.003 15:01:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:54.003 15:01:12 -- nvmf/common.sh@123 -- # set -e 00:15:54.003 15:01:12 -- nvmf/common.sh@124 -- # return 0 00:15:54.003 15:01:12 -- nvmf/common.sh@477 -- # '[' -n 3226119 ']' 00:15:54.004 15:01:12 -- nvmf/common.sh@478 -- # killprocess 3226119 00:15:54.004 15:01:12 -- common/autotest_common.sh@926 -- # '[' -z 3226119 ']' 00:15:54.004 15:01:12 -- common/autotest_common.sh@930 -- # kill -0 3226119 00:15:54.004 15:01:12 -- common/autotest_common.sh@931 -- # uname 00:15:54.004 15:01:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:54.004 15:01:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3226119 00:15:54.004 15:01:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:54.004 15:01:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:54.004 15:01:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3226119' 00:15:54.004 killing process with pid 3226119 00:15:54.004 15:01:12 -- common/autotest_common.sh@945 -- # kill 3226119 00:15:54.004 15:01:12 -- common/autotest_common.sh@950 -- # wait 3226119 00:15:54.262 15:01:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:54.262 15:01:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:54.262 15:01:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:54.262 15:01:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:54.262 15:01:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:54.262 15:01:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.262 15:01:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.262 15:01:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.165 15:01:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:56.165 00:15:56.165 real 0m17.297s 00:15:56.165 user 0m30.746s 00:15:56.165 sys 0m5.707s 
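To summarize the delete_subsystem run: the target is configured entirely over RPC (TCP transport, subsystem cnode1 limited to 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev as the namespace), spdk_nvme_perf is started against it, and the subsystem is deleted while I/O is in flight; the repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines above are the expected aborts, after which the perf process exits and the harness confirms it is gone. A rough scripts/rpc.py equivalent of the rpc_cmd sequence in the trace (rpc_cmd wraps the same RPC methods; arguments copied from the log):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# ...start spdk_nvme_perf against trtype:tcp traddr:10.0.0.2 trsvcid:4420, then, mid-I/O:
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1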
00:15:56.165 15:01:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.165 15:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:56.165 ************************************ 00:15:56.165 END TEST nvmf_delete_subsystem 00:15:56.165 ************************************ 00:15:56.424 15:01:15 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:56.424 15:01:15 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:56.424 15:01:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:56.424 15:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:56.424 15:01:15 -- common/autotest_common.sh@10 -- # set +x 00:15:56.424 ************************************ 00:15:56.424 START TEST nvmf_nvme_cli 00:15:56.424 ************************************ 00:15:56.424 15:01:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:56.424 * Looking for test storage... 00:15:56.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.424 15:01:15 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.424 15:01:15 -- nvmf/common.sh@7 -- # uname -s 00:15:56.424 15:01:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.424 15:01:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.424 15:01:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.424 15:01:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.424 15:01:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.424 15:01:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.424 15:01:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.424 15:01:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.424 15:01:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.424 15:01:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.424 15:01:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:56.424 15:01:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:56.424 15:01:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.424 15:01:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.424 15:01:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.424 15:01:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.424 15:01:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.424 15:01:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.424 15:01:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.424 15:01:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.424 15:01:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.424 15:01:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.424 15:01:15 -- paths/export.sh@5 -- # export PATH 00:15:56.424 15:01:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.424 15:01:15 -- nvmf/common.sh@46 -- # : 0 00:15:56.424 15:01:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:56.424 15:01:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:56.424 15:01:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:56.424 15:01:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.425 15:01:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.425 15:01:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:56.425 15:01:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:56.425 15:01:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:56.425 15:01:15 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.425 15:01:15 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.425 15:01:15 -- target/nvme_cli.sh@14 -- # devs=() 00:15:56.425 15:01:15 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:56.425 15:01:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:56.425 15:01:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.425 15:01:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:56.425 15:01:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:56.425 15:01:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:56.425 15:01:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.425 15:01:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.425 15:01:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.425 15:01:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:56.425 15:01:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:56.425 15:01:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:56.425 15:01:15 -- common/autotest_common.sh@10 -- # set +x 00:16:02.990 15:01:21 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:02.990 15:01:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:02.990 15:01:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:02.990 15:01:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:02.990 15:01:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:02.990 15:01:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:02.990 15:01:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:02.990 15:01:21 -- nvmf/common.sh@294 -- # net_devs=() 00:16:02.990 15:01:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:02.990 15:01:21 -- nvmf/common.sh@295 -- # e810=() 00:16:02.990 15:01:21 -- nvmf/common.sh@295 -- # local -ga e810 00:16:02.990 15:01:21 -- nvmf/common.sh@296 -- # x722=() 00:16:02.990 15:01:21 -- nvmf/common.sh@296 -- # local -ga x722 00:16:02.990 15:01:21 -- nvmf/common.sh@297 -- # mlx=() 00:16:02.990 15:01:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:02.990 15:01:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.990 15:01:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:02.990 15:01:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:02.990 15:01:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:02.990 15:01:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:02.990 15:01:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:02.990 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:02.990 15:01:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:02.990 15:01:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:02.990 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:02.990 15:01:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:16:02.990 15:01:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:02.990 15:01:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:02.990 15:01:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:02.990 15:01:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.990 15:01:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:02.991 15:01:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.991 15:01:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:02.991 Found net devices under 0000:af:00.0: cvl_0_0 00:16:02.991 15:01:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.991 15:01:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:02.991 15:01:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.991 15:01:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:02.991 15:01:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.991 15:01:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:02.991 Found net devices under 0000:af:00.1: cvl_0_1 00:16:02.991 15:01:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.991 15:01:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:02.991 15:01:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:02.991 15:01:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:02.991 15:01:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:02.991 15:01:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:02.991 15:01:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.991 15:01:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.991 15:01:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.991 15:01:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:02.991 15:01:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.991 15:01:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.991 15:01:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:02.991 15:01:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.991 15:01:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.991 15:01:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:02.991 15:01:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:02.991 15:01:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.991 15:01:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.991 15:01:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.991 15:01:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.991 15:01:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:02.991 15:01:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.991 15:01:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.991 15:01:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.991 15:01:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:02.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:02.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:16:02.991 00:16:02.991 --- 10.0.0.2 ping statistics --- 00:16:02.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.991 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:16:02.991 15:01:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:16:02.991 00:16:02.991 --- 10.0.0.1 ping statistics --- 00:16:02.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.991 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:16:02.991 15:01:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.991 15:01:21 -- nvmf/common.sh@410 -- # return 0 00:16:02.991 15:01:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:02.991 15:01:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.991 15:01:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:02.991 15:01:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:02.991 15:01:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.991 15:01:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:02.991 15:01:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:02.991 15:01:21 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:02.991 15:01:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:02.991 15:01:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:02.991 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:16:02.991 15:01:21 -- nvmf/common.sh@469 -- # nvmfpid=3231781 00:16:02.991 15:01:21 -- nvmf/common.sh@470 -- # waitforlisten 3231781 00:16:02.991 15:01:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:02.991 15:01:21 -- common/autotest_common.sh@819 -- # '[' -z 3231781 ']' 00:16:02.991 15:01:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.991 15:01:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:02.991 15:01:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.991 15:01:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:02.991 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:16:02.991 [2024-06-11 15:01:21.784431] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:02.991 [2024-06-11 15:01:21.784485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.991 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.249 [2024-06-11 15:01:21.886807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.249 [2024-06-11 15:01:21.975238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:03.249 [2024-06-11 15:01:21.975386] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.249 [2024-06-11 15:01:21.975398] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:03.249 [2024-06-11 15:01:21.975407] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.249 [2024-06-11 15:01:21.975457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.249 [2024-06-11 15:01:21.975555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.249 [2024-06-11 15:01:21.975667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.249 [2024-06-11 15:01:21.975668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.183 15:01:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:04.183 15:01:22 -- common/autotest_common.sh@852 -- # return 0 00:16:04.183 15:01:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:04.183 15:01:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:04.183 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:04.183 15:01:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.183 15:01:22 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.183 15:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.183 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:04.183 [2024-06-11 15:01:22.759921] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.183 15:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.183 15:01:22 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:04.183 15:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.183 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:04.183 Malloc0 00:16:04.183 15:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.183 15:01:22 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:04.183 15:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.183 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:04.183 Malloc1 00:16:04.183 15:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.183 15:01:22 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:04.183 15:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.183 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:04.183 15:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.183 15:01:22 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.183 15:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.183 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:04.184 15:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.184 15:01:22 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:04.184 15:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.184 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:04.184 15:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.184 15:01:22 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.184 15:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.184 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:04.184 [2024-06-11 15:01:22.842854] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:16:04.184 15:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.184 15:01:22 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:04.184 15:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.184 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:04.184 15:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.184 15:01:22 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:04.184 00:16:04.184 Discovery Log Number of Records 2, Generation counter 2 00:16:04.184 =====Discovery Log Entry 0====== 00:16:04.184 trtype: tcp 00:16:04.184 adrfam: ipv4 00:16:04.184 subtype: current discovery subsystem 00:16:04.184 treq: not required 00:16:04.184 portid: 0 00:16:04.184 trsvcid: 4420 00:16:04.184 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:04.184 traddr: 10.0.0.2 00:16:04.184 eflags: explicit discovery connections, duplicate discovery information 00:16:04.184 sectype: none 00:16:04.184 =====Discovery Log Entry 1====== 00:16:04.184 trtype: tcp 00:16:04.184 adrfam: ipv4 00:16:04.184 subtype: nvme subsystem 00:16:04.184 treq: not required 00:16:04.184 portid: 0 00:16:04.184 trsvcid: 4420 00:16:04.184 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:04.184 traddr: 10.0.0.2 00:16:04.184 eflags: none 00:16:04.184 sectype: none 00:16:04.184 15:01:23 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:04.184 15:01:23 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:04.184 15:01:23 -- nvmf/common.sh@510 -- # local dev _ 00:16:04.184 15:01:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.442 15:01:23 -- nvmf/common.sh@509 -- # nvme list 00:16:04.442 15:01:23 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:04.442 15:01:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.442 15:01:23 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:04.442 15:01:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.442 15:01:23 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:04.442 15:01:23 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:05.815 15:01:24 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:05.815 15:01:24 -- common/autotest_common.sh@1177 -- # local i=0 00:16:05.815 15:01:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.815 15:01:24 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:16:05.815 15:01:24 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:16:05.815 15:01:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:07.717 15:01:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:07.717 15:01:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:07.717 15:01:26 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:07.717 15:01:26 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:16:07.717 15:01:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.717 15:01:26 -- common/autotest_common.sh@1187 -- # return 0 00:16:07.717 15:01:26 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:07.717 15:01:26 -- 
nvmf/common.sh@510 -- # local dev _ 00:16:07.717 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.717 15:01:26 -- nvmf/common.sh@509 -- # nvme list 00:16:07.717 15:01:26 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:07.717 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.717 15:01:26 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:07.717 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.717 15:01:26 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:07.717 15:01:26 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:07.717 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.717 15:01:26 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:07.717 15:01:26 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:07.717 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.717 15:01:26 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:07.717 /dev/nvme0n1 ]] 00:16:07.717 15:01:26 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:07.717 15:01:26 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:07.717 15:01:26 -- nvmf/common.sh@510 -- # local dev _ 00:16:07.717 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.717 15:01:26 -- nvmf/common.sh@509 -- # nvme list 00:16:07.983 15:01:26 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:07.983 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.983 15:01:26 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:07.983 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.983 15:01:26 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:07.983 15:01:26 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:07.983 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.983 15:01:26 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:07.983 15:01:26 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:07.983 15:01:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:07.983 15:01:26 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:07.983 15:01:26 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.243 15:01:26 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.243 15:01:26 -- common/autotest_common.sh@1198 -- # local i=0 00:16:08.243 15:01:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:08.243 15:01:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.243 15:01:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:08.243 15:01:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.243 15:01:26 -- common/autotest_common.sh@1210 -- # return 0 00:16:08.243 15:01:26 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:08.243 15:01:26 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.243 15:01:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.243 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:16:08.243 15:01:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.243 15:01:26 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:08.243 15:01:26 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:08.243 15:01:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:08.243 15:01:26 -- nvmf/common.sh@116 -- # sync 00:16:08.243 15:01:26 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:08.243 15:01:26 -- nvmf/common.sh@119 -- # set +e 00:16:08.243 15:01:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:08.243 15:01:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:08.243 rmmod nvme_tcp 00:16:08.243 rmmod nvme_fabrics 00:16:08.243 rmmod nvme_keyring 00:16:08.243 15:01:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:08.243 15:01:27 -- nvmf/common.sh@123 -- # set -e 00:16:08.243 15:01:27 -- nvmf/common.sh@124 -- # return 0 00:16:08.243 15:01:27 -- nvmf/common.sh@477 -- # '[' -n 3231781 ']' 00:16:08.243 15:01:27 -- nvmf/common.sh@478 -- # killprocess 3231781 00:16:08.243 15:01:27 -- common/autotest_common.sh@926 -- # '[' -z 3231781 ']' 00:16:08.243 15:01:27 -- common/autotest_common.sh@930 -- # kill -0 3231781 00:16:08.243 15:01:27 -- common/autotest_common.sh@931 -- # uname 00:16:08.243 15:01:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:08.243 15:01:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3231781 00:16:08.243 15:01:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:08.243 15:01:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:08.243 15:01:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3231781' 00:16:08.243 killing process with pid 3231781 00:16:08.243 15:01:27 -- common/autotest_common.sh@945 -- # kill 3231781 00:16:08.243 15:01:27 -- common/autotest_common.sh@950 -- # wait 3231781 00:16:08.502 15:01:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:08.502 15:01:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:08.502 15:01:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:08.502 15:01:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.502 15:01:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:08.502 15:01:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.502 15:01:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.502 15:01:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.035 15:01:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:11.035 00:16:11.035 real 0m14.383s 00:16:11.035 user 0m23.184s 00:16:11.035 sys 0m5.634s 00:16:11.035 15:01:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.035 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:11.035 ************************************ 00:16:11.035 END TEST nvmf_nvme_cli 00:16:11.035 ************************************ 00:16:11.035 15:01:29 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:16:11.035 15:01:29 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:11.035 15:01:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:11.035 15:01:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:11.035 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:11.035 ************************************ 00:16:11.035 START TEST nvmf_host_management 00:16:11.035 ************************************ 00:16:11.035 15:01:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:11.035 * Looking for test storage... 
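The nvme_cli sequence traced above (discover, connect, wait for the namespaces to appear, enumerate, disconnect) can be reproduced outside the harness with a few lines of shell. The following is only a sketch of what the traced helpers waitforserial and get_nvme_devs appear to do, not the SPDK common.sh implementation; the address, NQN, host identifiers and serial are simply the values visible in this run.

#!/usr/bin/env bash
# Sketch: connect to the NVMe/TCP subsystem used above, wait for its
# namespaces to show up, list them, then disconnect. Values from this run.
set -euo pipefail

traddr=10.0.0.2 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
hostid=00abaa28-3537-eb11-906e-0017a4403562
serial=SPDKISFASTANDAWESOME
expected=2    # namespaces expected for this subsystem

nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
    -t tcp -n "$subnqn" -a "$traddr" -s "$trsvcid"

# Rough equivalent of waitforserial: poll lsblk until the expected number
# of block devices carrying the subsystem serial are visible.
for _ in $(seq 1 15); do
    found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
    if (( found >= expected )); then break; fi
    sleep 2
done

# Rough equivalent of get_nvme_devs: keep only the /dev/nvme* rows of 'nvme list'.
nvme list | awk '$1 ~ "^/dev/nvme" {print $1}'

nvme disconnect -n "$subnqn"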
00:16:11.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.035 15:01:29 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.035 15:01:29 -- nvmf/common.sh@7 -- # uname -s 00:16:11.035 15:01:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.035 15:01:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.035 15:01:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.035 15:01:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.035 15:01:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.035 15:01:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.035 15:01:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.035 15:01:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.035 15:01:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.035 15:01:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.035 15:01:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:11.035 15:01:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:11.035 15:01:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.035 15:01:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.035 15:01:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.035 15:01:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.035 15:01:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.035 15:01:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.035 15:01:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.035 15:01:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.035 15:01:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.035 15:01:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.035 15:01:29 -- paths/export.sh@5 -- # export PATH 00:16:11.035 15:01:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.035 15:01:29 -- nvmf/common.sh@46 -- # : 0 00:16:11.035 15:01:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:11.035 15:01:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:11.035 15:01:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:11.035 15:01:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.035 15:01:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.035 15:01:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:11.035 15:01:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:11.035 15:01:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:11.035 15:01:29 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.035 15:01:29 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.035 15:01:29 -- target/host_management.sh@104 -- # nvmftestinit 00:16:11.035 15:01:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:11.035 15:01:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.035 15:01:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:11.035 15:01:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:11.035 15:01:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:11.035 15:01:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.036 15:01:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.036 15:01:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.036 15:01:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:11.036 15:01:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:11.036 15:01:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:11.036 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:17.602 15:01:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:17.602 15:01:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:17.602 15:01:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:17.602 15:01:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:17.602 15:01:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:17.602 15:01:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:17.602 15:01:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:17.602 15:01:35 -- nvmf/common.sh@294 -- # net_devs=() 00:16:17.602 15:01:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:17.602 
15:01:35 -- nvmf/common.sh@295 -- # e810=() 00:16:17.602 15:01:35 -- nvmf/common.sh@295 -- # local -ga e810 00:16:17.602 15:01:35 -- nvmf/common.sh@296 -- # x722=() 00:16:17.602 15:01:35 -- nvmf/common.sh@296 -- # local -ga x722 00:16:17.602 15:01:35 -- nvmf/common.sh@297 -- # mlx=() 00:16:17.602 15:01:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:17.602 15:01:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.602 15:01:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:17.602 15:01:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:17.602 15:01:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:17.602 15:01:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:17.602 15:01:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:17.602 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:17.602 15:01:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:17.602 15:01:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:17.602 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:17.602 15:01:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:17.602 15:01:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:17.602 15:01:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.602 15:01:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:17.602 15:01:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.602 15:01:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:16:17.602 Found net devices under 0000:af:00.0: cvl_0_0 00:16:17.602 15:01:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.602 15:01:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:17.602 15:01:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.602 15:01:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:17.602 15:01:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.602 15:01:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:17.602 Found net devices under 0000:af:00.1: cvl_0_1 00:16:17.602 15:01:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.602 15:01:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:17.602 15:01:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:17.602 15:01:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:17.602 15:01:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.602 15:01:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.602 15:01:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.602 15:01:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:17.602 15:01:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.602 15:01:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.602 15:01:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:17.602 15:01:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.602 15:01:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.602 15:01:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:17.602 15:01:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:17.602 15:01:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.602 15:01:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.602 15:01:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.602 15:01:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.602 15:01:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:17.602 15:01:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.602 15:01:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.602 15:01:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.602 15:01:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:17.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:16:17.602 00:16:17.602 --- 10.0.0.2 ping statistics --- 00:16:17.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.602 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:16:17.602 15:01:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:16:17.602 00:16:17.602 --- 10.0.0.1 ping statistics --- 00:16:17.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.602 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:16:17.602 15:01:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.602 15:01:35 -- nvmf/common.sh@410 -- # return 0 00:16:17.602 15:01:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:17.602 15:01:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.602 15:01:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:17.602 15:01:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.602 15:01:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:17.602 15:01:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:17.602 15:01:35 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:17.602 15:01:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:17.602 15:01:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:17.602 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:16:17.602 ************************************ 00:16:17.602 START TEST nvmf_host_management 00:16:17.602 ************************************ 00:16:17.602 15:01:35 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:17.602 15:01:35 -- target/host_management.sh@69 -- # starttarget 00:16:17.602 15:01:35 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:17.602 15:01:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.602 15:01:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:17.602 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:16:17.602 15:01:35 -- nvmf/common.sh@469 -- # nvmfpid=3236777 00:16:17.602 15:01:35 -- nvmf/common.sh@470 -- # waitforlisten 3236777 00:16:17.602 15:01:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:17.602 15:01:35 -- common/autotest_common.sh@819 -- # '[' -z 3236777 ']' 00:16:17.602 15:01:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.602 15:01:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.602 15:01:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.602 15:01:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.603 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:16:17.603 [2024-06-11 15:01:35.934851] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
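Before the target application is started, the nvmf_tcp_init trace above splits the two E810 ports into a target/initiator pair: the target-side port is moved into its own network namespace with 10.0.0.2 while the initiator-side port keeps 10.0.0.1 in the root namespace. A condensed sketch of that topology setup follows, using the interface names, addresses and namespace name from the log and omitting the cleanup and error handling the real common.sh performs.

# Sketch of the netns-based NVMe/TCP test topology built above.
set -euo pipefail

target_if=cvl_0_0
initiator_if=cvl_0_1
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

# Target side lives in its own namespace with 10.0.0.2.
ip netns add "$ns"
ip link set "$target_if" netns "$ns"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

# Initiator side stays in the root namespace with 10.0.0.1.
ip addr add 10.0.0.1/24 dev "$initiator_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Open the NVMe/TCP port used by the tests and verify reachability both ways.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1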
00:16:17.603 [2024-06-11 15:01:35.934905] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.603 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.603 [2024-06-11 15:01:36.022874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.603 [2024-06-11 15:01:36.110085] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.603 [2024-06-11 15:01:36.110228] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.603 [2024-06-11 15:01:36.110239] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.603 [2024-06-11 15:01:36.110248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.603 [2024-06-11 15:01:36.110357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.603 [2024-06-11 15:01:36.110471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.603 [2024-06-11 15:01:36.110586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:17.603 [2024-06-11 15:01:36.110587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.172 15:01:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.172 15:01:36 -- common/autotest_common.sh@852 -- # return 0 00:16:18.172 15:01:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:18.172 15:01:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:18.172 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:18.172 15:01:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.172 15:01:36 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.172 15:01:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.172 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:18.172 [2024-06-11 15:01:36.918774] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.172 15:01:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.172 15:01:36 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:18.172 15:01:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:18.172 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:18.172 15:01:36 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:18.172 15:01:36 -- target/host_management.sh@23 -- # cat 00:16:18.172 15:01:36 -- target/host_management.sh@30 -- # rpc_cmd 00:16:18.172 15:01:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.172 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:18.172 Malloc0 00:16:18.172 [2024-06-11 15:01:36.982738] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.172 15:01:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.172 15:01:36 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:18.172 15:01:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:18.172 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:18.431 15:01:37 -- target/host_management.sh@73 -- # perfpid=3236968 00:16:18.431 15:01:37 -- target/host_management.sh@74 -- # 
waitforlisten 3236968 /var/tmp/bdevperf.sock 00:16:18.431 15:01:37 -- common/autotest_common.sh@819 -- # '[' -z 3236968 ']' 00:16:18.431 15:01:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:18.431 15:01:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:18.431 15:01:37 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:18.431 15:01:37 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:18.431 15:01:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:18.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:18.431 15:01:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:18.431 15:01:37 -- nvmf/common.sh@520 -- # config=() 00:16:18.431 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:18.431 15:01:37 -- nvmf/common.sh@520 -- # local subsystem config 00:16:18.431 15:01:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:18.431 15:01:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:18.431 { 00:16:18.431 "params": { 00:16:18.431 "name": "Nvme$subsystem", 00:16:18.431 "trtype": "$TEST_TRANSPORT", 00:16:18.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.431 "adrfam": "ipv4", 00:16:18.431 "trsvcid": "$NVMF_PORT", 00:16:18.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.432 "hdgst": ${hdgst:-false}, 00:16:18.432 "ddgst": ${ddgst:-false} 00:16:18.432 }, 00:16:18.432 "method": "bdev_nvme_attach_controller" 00:16:18.432 } 00:16:18.432 EOF 00:16:18.432 )") 00:16:18.432 15:01:37 -- nvmf/common.sh@542 -- # cat 00:16:18.432 15:01:37 -- nvmf/common.sh@544 -- # jq . 00:16:18.432 15:01:37 -- nvmf/common.sh@545 -- # IFS=, 00:16:18.432 15:01:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:18.432 "params": { 00:16:18.432 "name": "Nvme0", 00:16:18.432 "trtype": "tcp", 00:16:18.432 "traddr": "10.0.0.2", 00:16:18.432 "adrfam": "ipv4", 00:16:18.432 "trsvcid": "4420", 00:16:18.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:18.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:18.432 "hdgst": false, 00:16:18.432 "ddgst": false 00:16:18.432 }, 00:16:18.432 "method": "bdev_nvme_attach_controller" 00:16:18.432 }' 00:16:18.432 [2024-06-11 15:01:37.076955] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:18.432 [2024-06-11 15:01:37.077013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3236968 ] 00:16:18.432 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.432 [2024-06-11 15:01:37.168784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.432 [2024-06-11 15:01:37.253200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.691 Running I/O for 10 seconds... 
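The bdevperf invocation above receives its bdev_nvme configuration through /dev/fd/63, i.e. the JSON printed by gen_nvmf_target_json is passed via process substitution. A rough equivalent, writing the same attach parameters to a file first, is sketched below; the outer "subsystems"/"bdev" envelope is the standard SPDK JSON-config layout and is assumed here, since the trace only prints the inner attach entry.

# Sketch: run bdevperf against the target started above using the same
# bdev_nvme_attach_controller parameters shown in the trace.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10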
00:16:19.262 15:01:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:19.262 15:01:37 -- common/autotest_common.sh@852 -- # return 0 00:16:19.262 15:01:37 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:19.262 15:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.262 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:19.262 15:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.262 15:01:37 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:19.262 15:01:37 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:19.262 15:01:37 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:19.262 15:01:37 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:19.262 15:01:37 -- target/host_management.sh@52 -- # local ret=1 00:16:19.262 15:01:37 -- target/host_management.sh@53 -- # local i 00:16:19.262 15:01:37 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:19.262 15:01:37 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:19.262 15:01:37 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:19.262 15:01:37 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:19.262 15:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.262 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:19.262 15:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.262 15:01:38 -- target/host_management.sh@55 -- # read_io_count=949 00:16:19.262 15:01:38 -- target/host_management.sh@58 -- # '[' 949 -ge 100 ']' 00:16:19.262 15:01:38 -- target/host_management.sh@59 -- # ret=0 00:16:19.262 15:01:38 -- target/host_management.sh@60 -- # break 00:16:19.262 15:01:38 -- target/host_management.sh@64 -- # return 0 00:16:19.262 15:01:38 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:19.262 15:01:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.262 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:16:19.262 [2024-06-11 15:01:38.010457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the 
state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010650] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 
15:01:38.010979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.010997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.011006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.011015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.011030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.262 [2024-06-11 15:01:38.011039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4710 is same with the state(5) to be set 00:16:19.263 [2024-06-11 15:01:38.011491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.011985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.011995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.012007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.012017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.012036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.012047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.012060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.012069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.012081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.012091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.012103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.012113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.263 [2024-06-11 15:01:38.012126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.263 [2024-06-11 15:01:38.012135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:19.263 [2024-06-11 15:01:38.012147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:19.263 [2024-06-11 15:01:38.012159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-11 15:01:38.012172 - 15:01:38.012966] (the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for the remaining 36 outstanding READ/WRITE commands on sqid:1 -- cids 63, 7, 12, 59, 18, 2, 62, 53, 24, 47, 48, 34, 17, 45, 4, 14, 37, 3, 1, 13, 23, 15, 21, 19, 0, 11, 38, 32, 22, 39, 25, 51, 31, 30, 55 and 10 -- each completed as ABORTED - SQ DELETION (00/08))
00:16:19.264 [2024-06-11 15:01:38.012978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f96b0 is same with the state(5) to be set
00:16:19.264 [2024-06-11 15:01:38.013041] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15f96b0 was disconnected and freed. reset controller.
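Each "(00/08)" completion above is Status Code Type 0x0 (generic command status) with Status Code 0x08, "Command Aborted due to SQ Deletion": once the host decides to reset the controller it tears down the I/O qpair, and every command still outstanding on submission queue 1 is completed back to bdev_nvme with that status before the qpair memory is freed. Outside of this test the same flood can be provoked by breaking the target side while I/O is in flight -- a hedged sketch reusing the subsystem and listener from this run, not a step the script above actually performs:

    # On the target, while bdevperf I/O is running against cnode0 (illustrative only)
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # The initiator's in-flight commands on that queue pair are then failed
    # host-side as ABORTED - SQ DELETION (00/08), exactly as printed above.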
00:16:19.264 [2024-06-11 15:01:38.014393] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:19.264 15:01:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.264 15:01:38 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:19.264 15:01:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.264 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:16:19.264 task offset: 4096 on job bdev=Nvme0n1 fails 00:16:19.264 00:16:19.264 Latency(us) 00:16:19.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.264 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:19.264 Job: Nvme0n1 ended in about 0.52 seconds with error 00:16:19.264 Verification LBA range: start 0x0 length 0x400 00:16:19.264 Nvme0n1 : 0.52 1967.36 122.96 122.24 0.00 30120.53 8817.57 36223.53 00:16:19.264 =================================================================================================================== 00:16:19.264 Total : 1967.36 122.96 122.24 0.00 30120.53 8817.57 36223.53 00:16:19.264 [2024-06-11 15:01:38.016708] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:19.264 [2024-06-11 15:01:38.016730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fbe40 (9): Bad file descriptor 00:16:19.264 15:01:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.264 15:01:38 -- target/host_management.sh@87 -- # sleep 1 00:16:19.264 [2024-06-11 15:01:38.029920] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:20.202 15:01:39 -- target/host_management.sh@91 -- # kill -9 3236968 00:16:20.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3236968) - No such process 00:16:20.202 15:01:39 -- target/host_management.sh@91 -- # true 00:16:20.202 15:01:39 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:20.202 15:01:39 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:20.202 15:01:39 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:20.202 15:01:39 -- nvmf/common.sh@520 -- # config=() 00:16:20.202 15:01:39 -- nvmf/common.sh@520 -- # local subsystem config 00:16:20.202 15:01:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:20.202 15:01:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:20.202 { 00:16:20.202 "params": { 00:16:20.202 "name": "Nvme$subsystem", 00:16:20.202 "trtype": "$TEST_TRANSPORT", 00:16:20.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.202 "adrfam": "ipv4", 00:16:20.202 "trsvcid": "$NVMF_PORT", 00:16:20.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.202 "hdgst": ${hdgst:-false}, 00:16:20.202 "ddgst": ${ddgst:-false} 00:16:20.202 }, 00:16:20.202 "method": "bdev_nvme_attach_controller" 00:16:20.202 } 00:16:20.202 EOF 00:16:20.202 )") 00:16:20.202 15:01:39 -- nvmf/common.sh@542 -- # cat 00:16:20.202 15:01:39 -- nvmf/common.sh@544 -- # jq . 
00:16:20.202 15:01:39 -- nvmf/common.sh@545 -- # IFS=, 00:16:20.202 15:01:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:20.202 "params": { 00:16:20.202 "name": "Nvme0", 00:16:20.202 "trtype": "tcp", 00:16:20.202 "traddr": "10.0.0.2", 00:16:20.202 "adrfam": "ipv4", 00:16:20.202 "trsvcid": "4420", 00:16:20.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:20.202 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:20.202 "hdgst": false, 00:16:20.202 "ddgst": false 00:16:20.202 }, 00:16:20.202 "method": "bdev_nvme_attach_controller" 00:16:20.202 }' 00:16:20.462 [2024-06-11 15:01:39.076401] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:20.462 [2024-06-11 15:01:39.076463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3237485 ] 00:16:20.462 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.462 [2024-06-11 15:01:39.164634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.462 [2024-06-11 15:01:39.245991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.721 Running I/O for 1 seconds... 00:16:22.099 00:16:22.100 Latency(us) 00:16:22.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.100 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:22.100 Verification LBA range: start 0x0 length 0x400 00:16:22.100 Nvme0n1 : 1.01 2143.89 133.99 0.00 0.00 29418.15 1802.24 38368.35 00:16:22.100 =================================================================================================================== 00:16:22.100 Total : 2143.89 133.99 0.00 0.00 29418.15 1802.24 38368.35 00:16:22.100 15:01:40 -- target/host_management.sh@101 -- # stoptarget 00:16:22.100 15:01:40 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:22.100 15:01:40 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:22.100 15:01:40 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:22.100 15:01:40 -- target/host_management.sh@40 -- # nvmftestfini 00:16:22.100 15:01:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:22.100 15:01:40 -- nvmf/common.sh@116 -- # sync 00:16:22.100 15:01:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:22.100 15:01:40 -- nvmf/common.sh@119 -- # set +e 00:16:22.100 15:01:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:22.100 15:01:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:22.100 rmmod nvme_tcp 00:16:22.100 rmmod nvme_fabrics 00:16:22.100 rmmod nvme_keyring 00:16:22.100 15:01:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:22.100 15:01:40 -- nvmf/common.sh@123 -- # set -e 00:16:22.100 15:01:40 -- nvmf/common.sh@124 -- # return 0 00:16:22.100 15:01:40 -- nvmf/common.sh@477 -- # '[' -n 3236777 ']' 00:16:22.100 15:01:40 -- nvmf/common.sh@478 -- # killprocess 3236777 00:16:22.100 15:01:40 -- common/autotest_common.sh@926 -- # '[' -z 3236777 ']' 00:16:22.100 15:01:40 -- common/autotest_common.sh@930 -- # kill -0 3236777 00:16:22.100 15:01:40 -- common/autotest_common.sh@931 -- # uname 00:16:22.100 15:01:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:22.100 15:01:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3236777 00:16:22.100 15:01:40 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:22.100 15:01:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:22.100 15:01:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3236777' 00:16:22.100 killing process with pid 3236777 00:16:22.100 15:01:40 -- common/autotest_common.sh@945 -- # kill 3236777 00:16:22.100 15:01:40 -- common/autotest_common.sh@950 -- # wait 3236777 00:16:22.359 [2024-06-11 15:01:41.096487] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:22.359 15:01:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:22.359 15:01:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:22.359 15:01:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:22.359 15:01:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.359 15:01:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:22.359 15:01:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.359 15:01:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.359 15:01:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.894 15:01:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:24.894 00:16:24.894 real 0m7.312s 00:16:24.894 user 0m22.728s 00:16:24.894 sys 0m1.237s 00:16:24.894 15:01:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.894 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:24.894 ************************************ 00:16:24.894 END TEST nvmf_host_management 00:16:24.895 ************************************ 00:16:24.895 15:01:43 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:24.895 00:16:24.895 real 0m13.788s 00:16:24.895 user 0m24.433s 00:16:24.895 sys 0m5.901s 00:16:24.895 15:01:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.895 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:24.895 ************************************ 00:16:24.895 END TEST nvmf_host_management 00:16:24.895 ************************************ 00:16:24.895 15:01:43 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:24.895 15:01:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:24.895 15:01:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:24.895 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:24.895 ************************************ 00:16:24.895 START TEST nvmf_lvol 00:16:24.895 ************************************ 00:16:24.895 15:01:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:24.895 * Looking for test storage... 
00:16:24.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.895 15:01:43 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.895 15:01:43 -- nvmf/common.sh@7 -- # uname -s 00:16:24.895 15:01:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.895 15:01:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.895 15:01:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.895 15:01:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.895 15:01:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.895 15:01:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.895 15:01:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.895 15:01:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.895 15:01:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.895 15:01:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.895 15:01:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:24.895 15:01:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:24.895 15:01:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.895 15:01:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.895 15:01:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.895 15:01:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.895 15:01:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.895 15:01:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.895 15:01:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.895 15:01:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.895 15:01:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.895 15:01:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.895 15:01:43 -- paths/export.sh@5 -- # export PATH 00:16:24.895 15:01:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.895 15:01:43 -- nvmf/common.sh@46 -- # : 0 00:16:24.895 15:01:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:24.895 15:01:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:24.895 15:01:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:24.895 15:01:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.895 15:01:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.895 15:01:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:24.895 15:01:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:24.895 15:01:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:24.895 15:01:43 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.895 15:01:43 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.895 15:01:43 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:24.895 15:01:43 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:24.895 15:01:43 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:24.895 15:01:43 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:24.895 15:01:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:24.895 15:01:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.895 15:01:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:24.895 15:01:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:24.895 15:01:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:24.895 15:01:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.895 15:01:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.895 15:01:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.895 15:01:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:24.895 15:01:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:24.895 15:01:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:24.895 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.638 15:01:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:31.638 15:01:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:31.638 15:01:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:31.638 15:01:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:31.638 15:01:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:31.638 15:01:49 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:16:31.638 15:01:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:31.638 15:01:49 -- nvmf/common.sh@294 -- # net_devs=() 00:16:31.638 15:01:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:31.638 15:01:49 -- nvmf/common.sh@295 -- # e810=() 00:16:31.638 15:01:49 -- nvmf/common.sh@295 -- # local -ga e810 00:16:31.638 15:01:49 -- nvmf/common.sh@296 -- # x722=() 00:16:31.638 15:01:49 -- nvmf/common.sh@296 -- # local -ga x722 00:16:31.638 15:01:49 -- nvmf/common.sh@297 -- # mlx=() 00:16:31.638 15:01:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:31.638 15:01:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.638 15:01:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:31.638 15:01:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:31.638 15:01:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:31.638 15:01:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:31.638 15:01:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:31.638 15:01:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:31.638 15:01:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:31.639 15:01:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:31.639 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:31.639 15:01:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:31.639 15:01:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:31.639 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:31.639 15:01:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:31.639 15:01:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:31.639 15:01:49 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.639 15:01:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:31.639 15:01:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.639 15:01:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:31.639 Found net devices under 0000:af:00.0: cvl_0_0 00:16:31.639 15:01:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.639 15:01:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:31.639 15:01:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.639 15:01:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:31.639 15:01:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.639 15:01:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:31.639 Found net devices under 0000:af:00.1: cvl_0_1 00:16:31.639 15:01:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.639 15:01:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:31.639 15:01:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:31.639 15:01:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:31.639 15:01:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.639 15:01:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.639 15:01:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.639 15:01:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:31.639 15:01:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.639 15:01:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.639 15:01:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:31.639 15:01:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.639 15:01:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.639 15:01:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:31.639 15:01:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:31.639 15:01:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.639 15:01:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.639 15:01:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.639 15:01:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.639 15:01:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:31.639 15:01:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.639 15:01:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.639 15:01:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.639 15:01:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:31.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:16:31.639 00:16:31.639 --- 10.0.0.2 ping statistics --- 00:16:31.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.639 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:16:31.639 15:01:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:16:31.639 00:16:31.639 --- 10.0.0.1 ping statistics --- 00:16:31.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.639 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:16:31.639 15:01:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.639 15:01:49 -- nvmf/common.sh@410 -- # return 0 00:16:31.639 15:01:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:31.639 15:01:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.639 15:01:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:31.639 15:01:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.639 15:01:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:31.639 15:01:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:31.639 15:01:49 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:31.639 15:01:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:31.639 15:01:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:31.639 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:16:31.639 15:01:49 -- nvmf/common.sh@469 -- # nvmfpid=3241823 00:16:31.639 15:01:49 -- nvmf/common.sh@470 -- # waitforlisten 3241823 00:16:31.639 15:01:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:31.639 15:01:49 -- common/autotest_common.sh@819 -- # '[' -z 3241823 ']' 00:16:31.639 15:01:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.639 15:01:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:31.639 15:01:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.639 15:01:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:31.639 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:16:31.639 [2024-06-11 15:01:49.954004] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:31.639 [2024-06-11 15:01:49.954062] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.639 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.639 [2024-06-11 15:01:50.048346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.639 [2024-06-11 15:01:50.139043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:31.639 [2024-06-11 15:01:50.139183] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.639 [2024-06-11 15:01:50.139195] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.639 [2024-06-11 15:01:50.139204] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
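Before the target is started, nvmf_tcp_init (traced above) puts the two E810 ports on opposite sides of a network namespace so initiator and target can talk TCP on one box: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, port 4420 is opened in iptables, and the two pings confirm reachability in both directions. Pulled out of the trace, that plumbing amounts to roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target address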
00:16:31.639 [2024-06-11 15:01:50.139246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.639 [2024-06-11 15:01:50.139346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.639 [2024-06-11 15:01:50.139347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.210 15:01:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.210 15:01:50 -- common/autotest_common.sh@852 -- # return 0 00:16:32.210 15:01:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:32.210 15:01:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:32.210 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 15:01:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.210 15:01:50 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:32.469 [2024-06-11 15:01:51.147289] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.469 15:01:51 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:32.728 15:01:51 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:32.728 15:01:51 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:32.987 15:01:51 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:32.987 15:01:51 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:33.246 15:01:51 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:33.505 15:01:52 -- target/nvmf_lvol.sh@29 -- # lvs=43f38083-f237-4e35-b0bf-7c6cd5d34a88 00:16:33.505 15:01:52 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 43f38083-f237-4e35-b0bf-7c6cd5d34a88 lvol 20 00:16:33.762 15:01:52 -- target/nvmf_lvol.sh@32 -- # lvol=5e6da3a1-9b30-4f6e-8f1f-8617535d5f3c 00:16:33.762 15:01:52 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:34.021 15:01:52 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5e6da3a1-9b30-4f6e-8f1f-8617535d5f3c 00:16:34.279 15:01:52 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:34.538 [2024-06-11 15:01:53.197085] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.538 15:01:53 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:34.797 15:01:53 -- target/nvmf_lvol.sh@42 -- # perf_pid=3242418 00:16:34.797 15:01:53 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:34.797 15:01:53 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:34.797 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.734 
15:01:54 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5e6da3a1-9b30-4f6e-8f1f-8617535d5f3c MY_SNAPSHOT 00:16:35.993 15:01:54 -- target/nvmf_lvol.sh@47 -- # snapshot=5de0e58a-6fb7-4000-85f4-c5dffff2dc5a 00:16:35.993 15:01:54 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5e6da3a1-9b30-4f6e-8f1f-8617535d5f3c 30 00:16:36.252 15:01:54 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5de0e58a-6fb7-4000-85f4-c5dffff2dc5a MY_CLONE 00:16:36.511 15:01:55 -- target/nvmf_lvol.sh@49 -- # clone=940785ef-3e55-43b8-b143-d5bd2d076736 00:16:36.511 15:01:55 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 940785ef-3e55-43b8-b143-d5bd2d076736 00:16:37.079 15:01:55 -- target/nvmf_lvol.sh@53 -- # wait 3242418 00:16:45.200 Initializing NVMe Controllers 00:16:45.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:45.200 Controller IO queue size 128, less than required. 00:16:45.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:45.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:45.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:45.200 Initialization complete. Launching workers. 00:16:45.200 ======================================================== 00:16:45.200 Latency(us) 00:16:45.200 Device Information : IOPS MiB/s Average min max 00:16:45.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9060.40 35.39 14132.45 2041.38 85862.11 00:16:45.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8956.30 34.99 14295.88 3506.44 68681.03 00:16:45.200 ======================================================== 00:16:45.200 Total : 18016.70 70.38 14213.70 2041.38 85862.11 00:16:45.200 00:16:45.200 15:02:03 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:45.459 15:02:04 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5e6da3a1-9b30-4f6e-8f1f-8617535d5f3c 00:16:45.717 15:02:04 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 43f38083-f237-4e35-b0bf-7c6cd5d34a88 00:16:45.717 15:02:04 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:45.717 15:02:04 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:45.717 15:02:04 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:45.717 15:02:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:45.717 15:02:04 -- nvmf/common.sh@116 -- # sync 00:16:45.717 15:02:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:45.717 15:02:04 -- nvmf/common.sh@119 -- # set +e 00:16:45.717 15:02:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:45.717 15:02:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:45.717 rmmod nvme_tcp 00:16:45.717 rmmod nvme_fabrics 00:16:45.976 rmmod nvme_keyring 00:16:45.976 15:02:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:45.976 15:02:04 -- nvmf/common.sh@123 -- # set -e 00:16:45.976 15:02:04 -- nvmf/common.sh@124 -- # return 0 00:16:45.976 15:02:04 -- nvmf/common.sh@477 -- # '[' -n 3241823 ']' 
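Stripped of the UUIDs and generated names, the volume-management path this test just drove is: two 64 MiB malloc bdevs striped into raid0, an lvstore on top, a 20 MiB lvol exported through cnode0, then snapshot, resize to 30, clone and inflate while spdk_nvme_perf writes to it over TCP. A hedged replay of those RPCs, using lvstore/lvol aliases instead of the UUIDs from this run:

    rpc.py bdev_malloc_create 64 512                     # Malloc0
    rpc.py bdev_malloc_create 64 512                     # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs
    rpc.py bdev_lvol_create -l lvs lvol 20
    rpc.py bdev_lvol_snapshot lvs/lvol MY_SNAPSHOT
    rpc.py bdev_lvol_resize lvs/lvol 30
    rpc.py bdev_lvol_clone lvs/MY_SNAPSHOT MY_CLONE
    rpc.py bdev_lvol_inflate lvs/MY_CLONE                # decouple the clone from its snapshot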
00:16:45.976 15:02:04 -- nvmf/common.sh@478 -- # killprocess 3241823 00:16:45.976 15:02:04 -- common/autotest_common.sh@926 -- # '[' -z 3241823 ']' 00:16:45.976 15:02:04 -- common/autotest_common.sh@930 -- # kill -0 3241823 00:16:45.976 15:02:04 -- common/autotest_common.sh@931 -- # uname 00:16:45.976 15:02:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:45.976 15:02:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3241823 00:16:45.976 15:02:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:45.976 15:02:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:45.976 15:02:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3241823' 00:16:45.976 killing process with pid 3241823 00:16:45.976 15:02:04 -- common/autotest_common.sh@945 -- # kill 3241823 00:16:45.976 15:02:04 -- common/autotest_common.sh@950 -- # wait 3241823 00:16:46.235 15:02:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:46.235 15:02:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:46.235 15:02:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:46.235 15:02:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.235 15:02:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:46.235 15:02:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.235 15:02:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.235 15:02:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.141 15:02:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:48.141 00:16:48.141 real 0m23.698s 00:16:48.141 user 1m7.977s 00:16:48.141 sys 0m7.674s 00:16:48.141 15:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.141 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:16:48.141 ************************************ 00:16:48.141 END TEST nvmf_lvol 00:16:48.141 ************************************ 00:16:48.401 15:02:07 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:48.401 15:02:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:48.401 15:02:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:48.401 15:02:07 -- common/autotest_common.sh@10 -- # set +x 00:16:48.401 ************************************ 00:16:48.401 START TEST nvmf_lvs_grow 00:16:48.401 ************************************ 00:16:48.401 15:02:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:48.401 * Looking for test storage... 
00:16:48.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:48.401 15:02:07 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.401 15:02:07 -- nvmf/common.sh@7 -- # uname -s 00:16:48.401 15:02:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.401 15:02:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.401 15:02:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.401 15:02:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.401 15:02:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.401 15:02:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.401 15:02:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.401 15:02:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.401 15:02:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.401 15:02:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.401 15:02:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:48.401 15:02:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:48.401 15:02:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.401 15:02:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.401 15:02:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.401 15:02:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.401 15:02:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.401 15:02:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.401 15:02:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.401 15:02:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.401 15:02:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.401 15:02:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.401 15:02:07 -- paths/export.sh@5 -- # export PATH 00:16:48.401 15:02:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.401 15:02:07 -- nvmf/common.sh@46 -- # : 0 00:16:48.401 15:02:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:48.401 15:02:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:48.401 15:02:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:48.401 15:02:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.401 15:02:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.401 15:02:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:48.401 15:02:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:48.401 15:02:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:48.401 15:02:07 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:48.401 15:02:07 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.401 15:02:07 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:48.401 15:02:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:48.401 15:02:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.401 15:02:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:48.401 15:02:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:48.401 15:02:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:48.401 15:02:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.401 15:02:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.401 15:02:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.401 15:02:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:48.401 15:02:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:48.401 15:02:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:48.401 15:02:07 -- common/autotest_common.sh@10 -- # set +x 00:16:54.965 15:02:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:54.965 15:02:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:54.965 15:02:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:54.965 15:02:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:54.965 15:02:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:54.965 15:02:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:54.965 15:02:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:54.965 15:02:13 -- nvmf/common.sh@294 -- # net_devs=() 00:16:54.965 15:02:13 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:16:54.965 15:02:13 -- nvmf/common.sh@295 -- # e810=() 00:16:54.965 15:02:13 -- nvmf/common.sh@295 -- # local -ga e810 00:16:54.965 15:02:13 -- nvmf/common.sh@296 -- # x722=() 00:16:54.965 15:02:13 -- nvmf/common.sh@296 -- # local -ga x722 00:16:54.965 15:02:13 -- nvmf/common.sh@297 -- # mlx=() 00:16:54.965 15:02:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:54.965 15:02:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.965 15:02:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:54.965 15:02:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:54.965 15:02:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:54.965 15:02:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:54.965 15:02:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:54.965 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:54.965 15:02:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:54.965 15:02:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:54.965 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:54.965 15:02:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:54.965 15:02:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:54.965 15:02:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.965 15:02:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:54.965 15:02:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.965 15:02:13 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:54.965 Found net devices under 0000:af:00.0: cvl_0_0 00:16:54.965 15:02:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.965 15:02:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:54.965 15:02:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.965 15:02:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:54.965 15:02:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.965 15:02:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:54.965 Found net devices under 0000:af:00.1: cvl_0_1 00:16:54.965 15:02:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.965 15:02:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:54.965 15:02:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:54.965 15:02:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:54.965 15:02:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:54.965 15:02:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.965 15:02:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.965 15:02:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.965 15:02:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:54.965 15:02:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.965 15:02:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.965 15:02:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:54.965 15:02:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.965 15:02:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.965 15:02:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:54.965 15:02:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:54.965 15:02:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.965 15:02:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.965 15:02:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.966 15:02:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.966 15:02:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:54.966 15:02:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.966 15:02:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.966 15:02:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.966 15:02:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:54.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:16:54.966 00:16:54.966 --- 10.0.0.2 ping statistics --- 00:16:54.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.966 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:16:54.966 15:02:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:16:54.966 00:16:54.966 --- 10.0.0.1 ping statistics --- 00:16:54.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.966 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:16:54.966 15:02:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.966 15:02:13 -- nvmf/common.sh@410 -- # return 0 00:16:54.966 15:02:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:54.966 15:02:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.966 15:02:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:54.966 15:02:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:54.966 15:02:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.966 15:02:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:54.966 15:02:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:54.966 15:02:13 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:54.966 15:02:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:54.966 15:02:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:54.966 15:02:13 -- common/autotest_common.sh@10 -- # set +x 00:16:54.966 15:02:13 -- nvmf/common.sh@469 -- # nvmfpid=3248553 00:16:54.966 15:02:13 -- nvmf/common.sh@470 -- # waitforlisten 3248553 00:16:54.966 15:02:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:54.966 15:02:13 -- common/autotest_common.sh@819 -- # '[' -z 3248553 ']' 00:16:54.966 15:02:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.966 15:02:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.966 15:02:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.966 15:02:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.966 15:02:13 -- common/autotest_common.sh@10 -- # set +x 00:16:54.966 [2024-06-11 15:02:13.768963] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:54.966 [2024-06-11 15:02:13.769016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.225 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.225 [2024-06-11 15:02:13.863682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.225 [2024-06-11 15:02:13.949492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:55.225 [2024-06-11 15:02:13.949636] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.225 [2024-06-11 15:02:13.949648] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.225 [2024-06-11 15:02:13.949658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.225 [2024-06-11 15:02:13.949687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.159 15:02:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:56.159 15:02:14 -- common/autotest_common.sh@852 -- # return 0 00:16:56.159 15:02:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:56.159 15:02:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:56.159 15:02:14 -- common/autotest_common.sh@10 -- # set +x 00:16:56.159 15:02:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:56.159 [2024-06-11 15:02:14.952105] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:56.159 15:02:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:56.159 15:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:56.159 15:02:14 -- common/autotest_common.sh@10 -- # set +x 00:16:56.159 ************************************ 00:16:56.159 START TEST lvs_grow_clean 00:16:56.159 ************************************ 00:16:56.159 15:02:14 -- common/autotest_common.sh@1104 -- # lvs_grow 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:56.159 15:02:14 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:56.417 15:02:15 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:56.417 15:02:15 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:56.676 15:02:15 -- target/nvmf_lvs_grow.sh@28 -- # lvs=eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:16:56.676 15:02:15 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:16:56.676 15:02:15 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:56.936 15:02:15 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:56.936 15:02:15 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:56.936 15:02:15 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eb2fdf17-907a-4c5f-8919-187c7a15e26e lvol 150 00:16:57.195 15:02:15 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9bb03504-eef0-421b-b5bc-09bcb9c9e65c 00:16:57.195 15:02:15 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:57.195 15:02:15 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:57.454 [2024-06-11 15:02:16.188688] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:57.454 [2024-06-11 15:02:16.188755] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:57.454 true 00:16:57.454 15:02:16 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:16:57.454 15:02:16 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:57.713 15:02:16 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:57.713 15:02:16 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:57.972 15:02:16 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9bb03504-eef0-421b-b5bc-09bcb9c9e65c 00:16:58.232 15:02:16 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:58.491 [2024-06-11 15:02:17.131645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.491 15:02:17 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:58.750 15:02:17 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3249283 00:16:58.750 15:02:17 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:58.750 15:02:17 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:58.750 15:02:17 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3249283 /var/tmp/bdevperf.sock 00:16:58.750 15:02:17 -- common/autotest_common.sh@819 -- # '[' -z 3249283 ']' 00:16:58.750 15:02:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.750 15:02:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:58.750 15:02:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.751 15:02:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:58.751 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:58.751 [2024-06-11 15:02:17.428958] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:58.751 [2024-06-11 15:02:17.429017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249283 ] 00:16:58.751 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.751 [2024-06-11 15:02:17.509614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.010 [2024-06-11 15:02:17.596864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.578 15:02:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:59.578 15:02:18 -- common/autotest_common.sh@852 -- # return 0 00:16:59.578 15:02:18 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:00.147 Nvme0n1 00:17:00.147 15:02:18 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:00.406 [ 00:17:00.406 { 00:17:00.406 "name": "Nvme0n1", 00:17:00.406 "aliases": [ 00:17:00.406 "9bb03504-eef0-421b-b5bc-09bcb9c9e65c" 00:17:00.406 ], 00:17:00.406 "product_name": "NVMe disk", 00:17:00.406 "block_size": 4096, 00:17:00.406 "num_blocks": 38912, 00:17:00.406 "uuid": "9bb03504-eef0-421b-b5bc-09bcb9c9e65c", 00:17:00.406 "assigned_rate_limits": { 00:17:00.406 "rw_ios_per_sec": 0, 00:17:00.406 "rw_mbytes_per_sec": 0, 00:17:00.406 "r_mbytes_per_sec": 0, 00:17:00.406 "w_mbytes_per_sec": 0 00:17:00.406 }, 00:17:00.406 "claimed": false, 00:17:00.406 "zoned": false, 00:17:00.406 "supported_io_types": { 00:17:00.406 "read": true, 00:17:00.406 "write": true, 00:17:00.406 "unmap": true, 00:17:00.406 "write_zeroes": true, 00:17:00.406 "flush": true, 00:17:00.406 "reset": true, 00:17:00.406 "compare": true, 00:17:00.406 "compare_and_write": true, 00:17:00.406 "abort": true, 00:17:00.406 "nvme_admin": true, 00:17:00.406 "nvme_io": true 00:17:00.406 }, 00:17:00.406 "driver_specific": { 00:17:00.406 "nvme": [ 00:17:00.406 { 00:17:00.406 "trid": { 00:17:00.406 "trtype": "TCP", 00:17:00.406 "adrfam": "IPv4", 00:17:00.406 "traddr": "10.0.0.2", 00:17:00.406 "trsvcid": "4420", 00:17:00.406 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:00.406 }, 00:17:00.406 "ctrlr_data": { 00:17:00.406 "cntlid": 1, 00:17:00.406 "vendor_id": "0x8086", 00:17:00.406 "model_number": "SPDK bdev Controller", 00:17:00.406 "serial_number": "SPDK0", 00:17:00.406 "firmware_revision": "24.01.1", 00:17:00.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:00.406 "oacs": { 00:17:00.406 "security": 0, 00:17:00.406 "format": 0, 00:17:00.406 "firmware": 0, 00:17:00.406 "ns_manage": 0 00:17:00.406 }, 00:17:00.406 "multi_ctrlr": true, 00:17:00.406 "ana_reporting": false 00:17:00.406 }, 00:17:00.407 "vs": { 00:17:00.407 "nvme_version": "1.3" 00:17:00.407 }, 00:17:00.407 "ns_data": { 00:17:00.407 "id": 1, 00:17:00.407 "can_share": true 00:17:00.407 } 00:17:00.407 } 00:17:00.407 ], 00:17:00.407 "mp_policy": "active_passive" 00:17:00.407 } 00:17:00.407 } 00:17:00.407 ] 00:17:00.407 15:02:19 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3249656 00:17:00.407 15:02:19 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:00.407 15:02:19 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:00.407 Running I/O 
for 10 seconds... 00:17:01.344 Latency(us) 00:17:01.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.344 Nvme0n1 : 1.00 15457.00 60.38 0.00 0.00 0.00 0.00 0.00 00:17:01.344 =================================================================================================================== 00:17:01.344 Total : 15457.00 60.38 0.00 0.00 0.00 0.00 0.00 00:17:01.344 00:17:02.288 15:02:21 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:17:02.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.547 Nvme0n1 : 2.00 15600.00 60.94 0.00 0.00 0.00 0.00 0.00 00:17:02.547 =================================================================================================================== 00:17:02.547 Total : 15600.00 60.94 0.00 0.00 0.00 0.00 0.00 00:17:02.547 00:17:02.547 true 00:17:02.547 15:02:21 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:17:02.547 15:02:21 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:02.806 15:02:21 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:02.806 15:02:21 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:02.806 15:02:21 -- target/nvmf_lvs_grow.sh@65 -- # wait 3249656 00:17:03.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.374 Nvme0n1 : 3.00 15633.00 61.07 0.00 0.00 0.00 0.00 0.00 00:17:03.374 =================================================================================================================== 00:17:03.374 Total : 15633.00 61.07 0.00 0.00 0.00 0.00 0.00 00:17:03.374 00:17:04.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.311 Nvme0n1 : 4.00 15672.00 61.22 0.00 0.00 0.00 0.00 0.00 00:17:04.311 =================================================================================================================== 00:17:04.311 Total : 15672.00 61.22 0.00 0.00 0.00 0.00 0.00 00:17:04.311 00:17:05.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.690 Nvme0n1 : 5.00 15711.60 61.37 0.00 0.00 0.00 0.00 0.00 00:17:05.690 =================================================================================================================== 00:17:05.690 Total : 15711.60 61.37 0.00 0.00 0.00 0.00 0.00 00:17:05.690 00:17:06.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.626 Nvme0n1 : 6.00 15727.83 61.44 0.00 0.00 0.00 0.00 0.00 00:17:06.626 =================================================================================================================== 00:17:06.626 Total : 15727.83 61.44 0.00 0.00 0.00 0.00 0.00 00:17:06.626 00:17:07.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.569 Nvme0n1 : 7.00 15739.14 61.48 0.00 0.00 0.00 0.00 0.00 00:17:07.569 =================================================================================================================== 00:17:07.569 Total : 15739.14 61.48 0.00 0.00 0.00 0.00 0.00 00:17:07.569 00:17:08.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.507 Nvme0n1 : 8.00 15755.62 61.55 0.00 0.00 0.00 0.00 0.00 00:17:08.507 
=================================================================================================================== 00:17:08.507 Total : 15755.62 61.55 0.00 0.00 0.00 0.00 0.00 00:17:08.507 00:17:09.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.443 Nvme0n1 : 9.00 15768.56 61.60 0.00 0.00 0.00 0.00 0.00 00:17:09.443 =================================================================================================================== 00:17:09.443 Total : 15768.56 61.60 0.00 0.00 0.00 0.00 0.00 00:17:09.443 00:17:10.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.379 Nvme0n1 : 10.00 15778.90 61.64 0.00 0.00 0.00 0.00 0.00 00:17:10.379 =================================================================================================================== 00:17:10.379 Total : 15778.90 61.64 0.00 0.00 0.00 0.00 0.00 00:17:10.379 00:17:10.379 00:17:10.379 Latency(us) 00:17:10.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.379 Nvme0n1 : 10.01 15778.22 61.63 0.00 0.00 8107.17 5064.15 16443.58 00:17:10.379 =================================================================================================================== 00:17:10.379 Total : 15778.22 61.63 0.00 0.00 8107.17 5064.15 16443.58 00:17:10.379 0 00:17:10.379 15:02:29 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3249283 00:17:10.379 15:02:29 -- common/autotest_common.sh@926 -- # '[' -z 3249283 ']' 00:17:10.379 15:02:29 -- common/autotest_common.sh@930 -- # kill -0 3249283 00:17:10.379 15:02:29 -- common/autotest_common.sh@931 -- # uname 00:17:10.379 15:02:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:10.379 15:02:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3249283 00:17:10.639 15:02:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:10.639 15:02:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:10.639 15:02:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3249283' 00:17:10.639 killing process with pid 3249283 00:17:10.639 15:02:29 -- common/autotest_common.sh@945 -- # kill 3249283 00:17:10.639 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.639 00:17:10.639 Latency(us) 00:17:10.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.639 =================================================================================================================== 00:17:10.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.639 15:02:29 -- common/autotest_common.sh@950 -- # wait 3249283 00:17:10.639 15:02:29 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:10.898 15:02:29 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:17:10.898 15:02:29 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:11.156 15:02:29 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:11.156 15:02:29 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:11.156 15:02:29 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:11.415 [2024-06-11 15:02:30.165193] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:11.415 15:02:30 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:17:11.415 15:02:30 -- common/autotest_common.sh@640 -- # local es=0 00:17:11.415 15:02:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:17:11.415 15:02:30 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.415 15:02:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.415 15:02:30 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.415 15:02:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.415 15:02:30 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.415 15:02:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.415 15:02:30 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.415 15:02:30 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:11.415 15:02:30 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:17:11.674 request: 00:17:11.674 { 00:17:11.674 "uuid": "eb2fdf17-907a-4c5f-8919-187c7a15e26e", 00:17:11.674 "method": "bdev_lvol_get_lvstores", 00:17:11.674 "req_id": 1 00:17:11.674 } 00:17:11.674 Got JSON-RPC error response 00:17:11.674 response: 00:17:11.674 { 00:17:11.674 "code": -19, 00:17:11.674 "message": "No such device" 00:17:11.674 } 00:17:11.674 15:02:30 -- common/autotest_common.sh@643 -- # es=1 00:17:11.674 15:02:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:11.674 15:02:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:11.674 15:02:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:11.674 15:02:30 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:11.933 aio_bdev 00:17:11.933 15:02:30 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9bb03504-eef0-421b-b5bc-09bcb9c9e65c 00:17:11.933 15:02:30 -- common/autotest_common.sh@887 -- # local bdev_name=9bb03504-eef0-421b-b5bc-09bcb9c9e65c 00:17:11.933 15:02:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:11.933 15:02:30 -- common/autotest_common.sh@889 -- # local i 00:17:11.933 15:02:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:11.933 15:02:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:11.933 15:02:30 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:12.192 15:02:30 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9bb03504-eef0-421b-b5bc-09bcb9c9e65c -t 2000 00:17:12.451 [ 00:17:12.451 { 00:17:12.451 "name": "9bb03504-eef0-421b-b5bc-09bcb9c9e65c", 00:17:12.451 "aliases": [ 00:17:12.451 "lvs/lvol" 
00:17:12.451 ], 00:17:12.451 "product_name": "Logical Volume", 00:17:12.451 "block_size": 4096, 00:17:12.451 "num_blocks": 38912, 00:17:12.451 "uuid": "9bb03504-eef0-421b-b5bc-09bcb9c9e65c", 00:17:12.451 "assigned_rate_limits": { 00:17:12.451 "rw_ios_per_sec": 0, 00:17:12.451 "rw_mbytes_per_sec": 0, 00:17:12.451 "r_mbytes_per_sec": 0, 00:17:12.451 "w_mbytes_per_sec": 0 00:17:12.451 }, 00:17:12.451 "claimed": false, 00:17:12.451 "zoned": false, 00:17:12.451 "supported_io_types": { 00:17:12.451 "read": true, 00:17:12.451 "write": true, 00:17:12.451 "unmap": true, 00:17:12.451 "write_zeroes": true, 00:17:12.451 "flush": false, 00:17:12.451 "reset": true, 00:17:12.451 "compare": false, 00:17:12.451 "compare_and_write": false, 00:17:12.451 "abort": false, 00:17:12.451 "nvme_admin": false, 00:17:12.451 "nvme_io": false 00:17:12.451 }, 00:17:12.451 "driver_specific": { 00:17:12.451 "lvol": { 00:17:12.451 "lvol_store_uuid": "eb2fdf17-907a-4c5f-8919-187c7a15e26e", 00:17:12.451 "base_bdev": "aio_bdev", 00:17:12.451 "thin_provision": false, 00:17:12.451 "snapshot": false, 00:17:12.451 "clone": false, 00:17:12.451 "esnap_clone": false 00:17:12.451 } 00:17:12.451 } 00:17:12.451 } 00:17:12.451 ] 00:17:12.451 15:02:31 -- common/autotest_common.sh@895 -- # return 0 00:17:12.451 15:02:31 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:17:12.451 15:02:31 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:12.710 15:02:31 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:12.710 15:02:31 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:17:12.710 15:02:31 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:12.970 15:02:31 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:12.970 15:02:31 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9bb03504-eef0-421b-b5bc-09bcb9c9e65c 00:17:13.228 15:02:31 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb2fdf17-907a-4c5f-8919-187c7a15e26e 00:17:13.501 15:02:32 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:13.762 00:17:13.762 real 0m17.403s 00:17:13.762 user 0m17.332s 00:17:13.762 sys 0m1.510s 00:17:13.762 15:02:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.762 15:02:32 -- common/autotest_common.sh@10 -- # set +x 00:17:13.762 ************************************ 00:17:13.762 END TEST lvs_grow_clean 00:17:13.762 ************************************ 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:13.762 15:02:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:13.762 15:02:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:13.762 15:02:32 -- common/autotest_common.sh@10 -- # set +x 00:17:13.762 ************************************ 00:17:13.762 START TEST lvs_grow_dirty 00:17:13.762 ************************************ 00:17:13.762 15:02:32 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:13.762 
15:02:32 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:13.762 15:02:32 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:14.021 15:02:32 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:14.021 15:02:32 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:14.279 15:02:32 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8b12d249-032e-49a0-b320-63493f40d9f3 00:17:14.279 15:02:32 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:14.279 15:02:32 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:14.538 15:02:33 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:14.538 15:02:33 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:14.538 15:02:33 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b12d249-032e-49a0-b320-63493f40d9f3 lvol 150 00:17:14.796 15:02:33 -- target/nvmf_lvs_grow.sh@33 -- # lvol=bf3b68eb-4764-49a6-8037-72ddf9c35bb1 00:17:14.796 15:02:33 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:14.796 15:02:33 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:14.796 [2024-06-11 15:02:33.614595] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:14.796 [2024-06-11 15:02:33.614659] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:14.796 true 00:17:14.796 15:02:33 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:14.796 15:02:33 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:15.055 15:02:33 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:15.056 15:02:33 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:15.315 15:02:34 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bf3b68eb-4764-49a6-8037-72ddf9c35bb1 00:17:15.574 15:02:34 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:15.832 15:02:34 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:16.092 15:02:34 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3252405 00:17:16.092 15:02:34 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.092 15:02:34 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:16.092 15:02:34 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3252405 /var/tmp/bdevperf.sock 00:17:16.092 15:02:34 -- common/autotest_common.sh@819 -- # '[' -z 3252405 ']' 00:17:16.092 15:02:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.092 15:02:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:16.092 15:02:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.092 15:02:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:16.092 15:02:34 -- common/autotest_common.sh@10 -- # set +x 00:17:16.092 [2024-06-11 15:02:34.866764] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:16.092 [2024-06-11 15:02:34.866825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252405 ] 00:17:16.092 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.351 [2024-06-11 15:02:34.947492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.351 [2024-06-11 15:02:35.033774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.287 15:02:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:17.287 15:02:35 -- common/autotest_common.sh@852 -- # return 0 00:17:17.287 15:02:35 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:17.546 Nvme0n1 00:17:17.546 15:02:36 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:17.805 [ 00:17:17.805 { 00:17:17.805 "name": "Nvme0n1", 00:17:17.805 "aliases": [ 00:17:17.805 "bf3b68eb-4764-49a6-8037-72ddf9c35bb1" 00:17:17.805 ], 00:17:17.805 "product_name": "NVMe disk", 00:17:17.805 "block_size": 4096, 00:17:17.805 "num_blocks": 38912, 00:17:17.805 "uuid": "bf3b68eb-4764-49a6-8037-72ddf9c35bb1", 00:17:17.806 "assigned_rate_limits": { 00:17:17.806 "rw_ios_per_sec": 0, 00:17:17.806 "rw_mbytes_per_sec": 0, 00:17:17.806 "r_mbytes_per_sec": 0, 00:17:17.806 "w_mbytes_per_sec": 0 00:17:17.806 }, 00:17:17.806 "claimed": false, 00:17:17.806 "zoned": false, 00:17:17.806 "supported_io_types": { 00:17:17.806 "read": true, 00:17:17.806 "write": true, 
00:17:17.806 "unmap": true, 00:17:17.806 "write_zeroes": true, 00:17:17.806 "flush": true, 00:17:17.806 "reset": true, 00:17:17.806 "compare": true, 00:17:17.806 "compare_and_write": true, 00:17:17.806 "abort": true, 00:17:17.806 "nvme_admin": true, 00:17:17.806 "nvme_io": true 00:17:17.806 }, 00:17:17.806 "driver_specific": { 00:17:17.806 "nvme": [ 00:17:17.806 { 00:17:17.806 "trid": { 00:17:17.806 "trtype": "TCP", 00:17:17.806 "adrfam": "IPv4", 00:17:17.806 "traddr": "10.0.0.2", 00:17:17.806 "trsvcid": "4420", 00:17:17.806 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:17.806 }, 00:17:17.806 "ctrlr_data": { 00:17:17.806 "cntlid": 1, 00:17:17.806 "vendor_id": "0x8086", 00:17:17.806 "model_number": "SPDK bdev Controller", 00:17:17.806 "serial_number": "SPDK0", 00:17:17.806 "firmware_revision": "24.01.1", 00:17:17.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.806 "oacs": { 00:17:17.806 "security": 0, 00:17:17.806 "format": 0, 00:17:17.806 "firmware": 0, 00:17:17.806 "ns_manage": 0 00:17:17.806 }, 00:17:17.806 "multi_ctrlr": true, 00:17:17.806 "ana_reporting": false 00:17:17.806 }, 00:17:17.806 "vs": { 00:17:17.806 "nvme_version": "1.3" 00:17:17.806 }, 00:17:17.806 "ns_data": { 00:17:17.806 "id": 1, 00:17:17.806 "can_share": true 00:17:17.806 } 00:17:17.806 } 00:17:17.806 ], 00:17:17.806 "mp_policy": "active_passive" 00:17:17.806 } 00:17:17.806 } 00:17:17.806 ] 00:17:17.806 15:02:36 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3252776 00:17:17.806 15:02:36 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:17.806 15:02:36 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:17.806 Running I/O for 10 seconds... 00:17:18.751 Latency(us) 00:17:18.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.751 Nvme0n1 : 1.00 15508.00 60.58 0.00 0.00 0.00 0.00 0.00 00:17:18.751 =================================================================================================================== 00:17:18.751 Total : 15508.00 60.58 0.00 0.00 0.00 0.00 0.00 00:17:18.751 00:17:19.691 15:02:38 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:19.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.950 Nvme0n1 : 2.00 15625.50 61.04 0.00 0.00 0.00 0.00 0.00 00:17:19.950 =================================================================================================================== 00:17:19.950 Total : 15625.50 61.04 0.00 0.00 0.00 0.00 0.00 00:17:19.950 00:17:19.950 true 00:17:19.950 15:02:38 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:19.950 15:02:38 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:20.210 15:02:38 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:20.210 15:02:38 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:20.210 15:02:38 -- target/nvmf_lvs_grow.sh@65 -- # wait 3252776 00:17:20.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.779 Nvme0n1 : 3.00 15669.33 61.21 0.00 0.00 0.00 0.00 0.00 00:17:20.779 
=================================================================================================================== 00:17:20.779 Total : 15669.33 61.21 0.00 0.00 0.00 0.00 0.00 00:17:20.779 00:17:22.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.158 Nvme0n1 : 4.00 15688.00 61.28 0.00 0.00 0.00 0.00 0.00 00:17:22.158 =================================================================================================================== 00:17:22.158 Total : 15688.00 61.28 0.00 0.00 0.00 0.00 0.00 00:17:22.158 00:17:23.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.098 Nvme0n1 : 5.00 15712.20 61.38 0.00 0.00 0.00 0.00 0.00 00:17:23.098 =================================================================================================================== 00:17:23.098 Total : 15712.20 61.38 0.00 0.00 0.00 0.00 0.00 00:17:23.098 00:17:24.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.096 Nvme0n1 : 6.00 15727.83 61.44 0.00 0.00 0.00 0.00 0.00 00:17:24.096 =================================================================================================================== 00:17:24.096 Total : 15727.83 61.44 0.00 0.00 0.00 0.00 0.00 00:17:24.096 00:17:25.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.033 Nvme0n1 : 7.00 15748.43 61.52 0.00 0.00 0.00 0.00 0.00 00:17:25.033 =================================================================================================================== 00:17:25.033 Total : 15748.43 61.52 0.00 0.00 0.00 0.00 0.00 00:17:25.033 00:17:25.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.969 Nvme0n1 : 8.00 15756.12 61.55 0.00 0.00 0.00 0.00 0.00 00:17:25.969 =================================================================================================================== 00:17:25.969 Total : 15756.12 61.55 0.00 0.00 0.00 0.00 0.00 00:17:25.969 00:17:26.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.907 Nvme0n1 : 9.00 15770.78 61.60 0.00 0.00 0.00 0.00 0.00 00:17:26.907 =================================================================================================================== 00:17:26.907 Total : 15770.78 61.60 0.00 0.00 0.00 0.00 0.00 00:17:26.907 00:17:27.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.844 Nvme0n1 : 10.00 15785.50 61.66 0.00 0.00 0.00 0.00 0.00 00:17:27.844 =================================================================================================================== 00:17:27.844 Total : 15785.50 61.66 0.00 0.00 0.00 0.00 0.00 00:17:27.844 00:17:27.844 00:17:27.844 Latency(us) 00:17:27.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.844 Nvme0n1 : 10.01 15786.70 61.67 0.00 0.00 8102.66 4974.78 16086.11 00:17:27.844 =================================================================================================================== 00:17:27.844 Total : 15786.70 61.67 0.00 0.00 8102.66 4974.78 16086.11 00:17:27.844 0 00:17:27.844 15:02:46 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3252405 00:17:27.844 15:02:46 -- common/autotest_common.sh@926 -- # '[' -z 3252405 ']' 00:17:27.844 15:02:46 -- common/autotest_common.sh@930 -- # kill -0 3252405 00:17:27.844 15:02:46 -- common/autotest_common.sh@931 -- # uname 00:17:27.844 15:02:46 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:27.844 15:02:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3252405 00:17:27.844 15:02:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:27.844 15:02:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:27.844 15:02:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3252405' 00:17:27.844 killing process with pid 3252405 00:17:27.844 15:02:46 -- common/autotest_common.sh@945 -- # kill 3252405 00:17:27.844 Received shutdown signal, test time was about 10.000000 seconds 00:17:27.844 00:17:27.844 Latency(us) 00:17:27.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.844 =================================================================================================================== 00:17:27.844 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:27.844 15:02:46 -- common/autotest_common.sh@950 -- # wait 3252405 00:17:28.103 15:02:46 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:28.362 15:02:47 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:28.362 15:02:47 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:28.621 15:02:47 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:28.621 15:02:47 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:28.621 15:02:47 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3248553 00:17:28.621 15:02:47 -- target/nvmf_lvs_grow.sh@74 -- # wait 3248553 00:17:28.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3248553 Killed "${NVMF_APP[@]}" "$@" 00:17:28.621 15:02:47 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:28.621 15:02:47 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:28.621 15:02:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:28.621 15:02:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:28.621 15:02:47 -- common/autotest_common.sh@10 -- # set +x 00:17:28.621 15:02:47 -- nvmf/common.sh@469 -- # nvmfpid=3254795 00:17:28.621 15:02:47 -- nvmf/common.sh@470 -- # waitforlisten 3254795 00:17:28.621 15:02:47 -- common/autotest_common.sh@819 -- # '[' -z 3254795 ']' 00:17:28.621 15:02:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.621 15:02:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:28.621 15:02:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.621 15:02:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:28.621 15:02:47 -- common/autotest_common.sh@10 -- # set +x 00:17:28.621 15:02:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:28.881 [2024-06-11 15:02:47.487528] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:28.881 [2024-06-11 15:02:47.487585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.881 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.881 [2024-06-11 15:02:47.582696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.881 [2024-06-11 15:02:47.670861] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:28.881 [2024-06-11 15:02:47.670998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.881 [2024-06-11 15:02:47.671010] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.881 [2024-06-11 15:02:47.671019] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.881 [2024-06-11 15:02:47.671048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.819 15:02:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:29.819 15:02:48 -- common/autotest_common.sh@852 -- # return 0 00:17:29.819 15:02:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:29.819 15:02:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:29.819 15:02:48 -- common/autotest_common.sh@10 -- # set +x 00:17:29.819 15:02:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.819 15:02:48 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:30.079 [2024-06-11 15:02:48.664129] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:30.079 [2024-06-11 15:02:48.664234] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:30.079 [2024-06-11 15:02:48.664273] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:30.079 15:02:48 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:30.079 15:02:48 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev bf3b68eb-4764-49a6-8037-72ddf9c35bb1 00:17:30.079 15:02:48 -- common/autotest_common.sh@887 -- # local bdev_name=bf3b68eb-4764-49a6-8037-72ddf9c35bb1 00:17:30.079 15:02:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:30.079 15:02:48 -- common/autotest_common.sh@889 -- # local i 00:17:30.079 15:02:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:30.079 15:02:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:30.079 15:02:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:30.079 15:02:48 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bf3b68eb-4764-49a6-8037-72ddf9c35bb1 -t 2000 00:17:30.338 [ 00:17:30.338 { 00:17:30.338 "name": "bf3b68eb-4764-49a6-8037-72ddf9c35bb1", 00:17:30.338 "aliases": [ 00:17:30.338 "lvs/lvol" 00:17:30.338 ], 00:17:30.338 "product_name": "Logical Volume", 00:17:30.338 "block_size": 4096, 00:17:30.338 "num_blocks": 38912, 00:17:30.338 "uuid": "bf3b68eb-4764-49a6-8037-72ddf9c35bb1", 00:17:30.338 "assigned_rate_limits": { 00:17:30.338 "rw_ios_per_sec": 0, 00:17:30.338 "rw_mbytes_per_sec": 0, 00:17:30.338 "r_mbytes_per_sec": 0, 00:17:30.338 
"w_mbytes_per_sec": 0 00:17:30.338 }, 00:17:30.338 "claimed": false, 00:17:30.338 "zoned": false, 00:17:30.338 "supported_io_types": { 00:17:30.338 "read": true, 00:17:30.338 "write": true, 00:17:30.338 "unmap": true, 00:17:30.338 "write_zeroes": true, 00:17:30.338 "flush": false, 00:17:30.338 "reset": true, 00:17:30.338 "compare": false, 00:17:30.338 "compare_and_write": false, 00:17:30.338 "abort": false, 00:17:30.338 "nvme_admin": false, 00:17:30.338 "nvme_io": false 00:17:30.338 }, 00:17:30.338 "driver_specific": { 00:17:30.338 "lvol": { 00:17:30.338 "lvol_store_uuid": "8b12d249-032e-49a0-b320-63493f40d9f3", 00:17:30.338 "base_bdev": "aio_bdev", 00:17:30.338 "thin_provision": false, 00:17:30.338 "snapshot": false, 00:17:30.338 "clone": false, 00:17:30.338 "esnap_clone": false 00:17:30.338 } 00:17:30.338 } 00:17:30.338 } 00:17:30.338 ] 00:17:30.338 15:02:49 -- common/autotest_common.sh@895 -- # return 0 00:17:30.338 15:02:49 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:30.338 15:02:49 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:30.598 15:02:49 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:30.598 15:02:49 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:30.598 15:02:49 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:30.857 15:02:49 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:30.857 15:02:49 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:31.116 [2024-06-11 15:02:49.852809] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:31.116 15:02:49 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:31.116 15:02:49 -- common/autotest_common.sh@640 -- # local es=0 00:17:31.116 15:02:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:31.116 15:02:49 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.116 15:02:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:31.116 15:02:49 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.116 15:02:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:31.116 15:02:49 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.116 15:02:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:31.116 15:02:49 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.116 15:02:49 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:31.116 15:02:49 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:31.375 request: 00:17:31.375 { 00:17:31.375 
"uuid": "8b12d249-032e-49a0-b320-63493f40d9f3", 00:17:31.375 "method": "bdev_lvol_get_lvstores", 00:17:31.375 "req_id": 1 00:17:31.375 } 00:17:31.375 Got JSON-RPC error response 00:17:31.375 response: 00:17:31.375 { 00:17:31.375 "code": -19, 00:17:31.375 "message": "No such device" 00:17:31.375 } 00:17:31.375 15:02:50 -- common/autotest_common.sh@643 -- # es=1 00:17:31.375 15:02:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:31.375 15:02:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:31.375 15:02:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:31.375 15:02:50 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:31.634 aio_bdev 00:17:31.634 15:02:50 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev bf3b68eb-4764-49a6-8037-72ddf9c35bb1 00:17:31.634 15:02:50 -- common/autotest_common.sh@887 -- # local bdev_name=bf3b68eb-4764-49a6-8037-72ddf9c35bb1 00:17:31.634 15:02:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:31.634 15:02:50 -- common/autotest_common.sh@889 -- # local i 00:17:31.634 15:02:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:31.634 15:02:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:31.634 15:02:50 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:31.893 15:02:50 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bf3b68eb-4764-49a6-8037-72ddf9c35bb1 -t 2000 00:17:32.152 [ 00:17:32.152 { 00:17:32.152 "name": "bf3b68eb-4764-49a6-8037-72ddf9c35bb1", 00:17:32.152 "aliases": [ 00:17:32.152 "lvs/lvol" 00:17:32.152 ], 00:17:32.152 "product_name": "Logical Volume", 00:17:32.152 "block_size": 4096, 00:17:32.152 "num_blocks": 38912, 00:17:32.152 "uuid": "bf3b68eb-4764-49a6-8037-72ddf9c35bb1", 00:17:32.152 "assigned_rate_limits": { 00:17:32.152 "rw_ios_per_sec": 0, 00:17:32.152 "rw_mbytes_per_sec": 0, 00:17:32.152 "r_mbytes_per_sec": 0, 00:17:32.152 "w_mbytes_per_sec": 0 00:17:32.152 }, 00:17:32.152 "claimed": false, 00:17:32.152 "zoned": false, 00:17:32.152 "supported_io_types": { 00:17:32.152 "read": true, 00:17:32.152 "write": true, 00:17:32.152 "unmap": true, 00:17:32.152 "write_zeroes": true, 00:17:32.152 "flush": false, 00:17:32.152 "reset": true, 00:17:32.152 "compare": false, 00:17:32.152 "compare_and_write": false, 00:17:32.152 "abort": false, 00:17:32.152 "nvme_admin": false, 00:17:32.152 "nvme_io": false 00:17:32.152 }, 00:17:32.152 "driver_specific": { 00:17:32.152 "lvol": { 00:17:32.152 "lvol_store_uuid": "8b12d249-032e-49a0-b320-63493f40d9f3", 00:17:32.152 "base_bdev": "aio_bdev", 00:17:32.152 "thin_provision": false, 00:17:32.152 "snapshot": false, 00:17:32.152 "clone": false, 00:17:32.152 "esnap_clone": false 00:17:32.152 } 00:17:32.152 } 00:17:32.152 } 00:17:32.152 ] 00:17:32.152 15:02:50 -- common/autotest_common.sh@895 -- # return 0 00:17:32.152 15:02:50 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:32.152 15:02:50 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:32.411 15:02:51 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:32.411 15:02:51 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:32.411 15:02:51 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:32.670 15:02:51 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:32.670 15:02:51 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf3b68eb-4764-49a6-8037-72ddf9c35bb1 00:17:32.929 15:02:51 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b12d249-032e-49a0-b320-63493f40d9f3 00:17:33.189 15:02:51 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:33.189 15:02:52 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:33.459 00:17:33.459 real 0m19.626s 00:17:33.459 user 0m50.472s 00:17:33.459 sys 0m3.579s 00:17:33.459 15:02:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.459 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:17:33.459 ************************************ 00:17:33.459 END TEST lvs_grow_dirty 00:17:33.459 ************************************ 00:17:33.459 15:02:52 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:33.459 15:02:52 -- common/autotest_common.sh@796 -- # type=--id 00:17:33.459 15:02:52 -- common/autotest_common.sh@797 -- # id=0 00:17:33.459 15:02:52 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:33.459 15:02:52 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:33.459 15:02:52 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:33.459 15:02:52 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:33.459 15:02:52 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:33.459 15:02:52 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:33.459 nvmf_trace.0 00:17:33.459 15:02:52 -- common/autotest_common.sh@811 -- # return 0 00:17:33.459 15:02:52 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:33.459 15:02:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:33.459 15:02:52 -- nvmf/common.sh@116 -- # sync 00:17:33.459 15:02:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:33.459 15:02:52 -- nvmf/common.sh@119 -- # set +e 00:17:33.459 15:02:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:33.459 15:02:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:33.459 rmmod nvme_tcp 00:17:33.459 rmmod nvme_fabrics 00:17:33.459 rmmod nvme_keyring 00:17:33.459 15:02:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:33.459 15:02:52 -- nvmf/common.sh@123 -- # set -e 00:17:33.459 15:02:52 -- nvmf/common.sh@124 -- # return 0 00:17:33.459 15:02:52 -- nvmf/common.sh@477 -- # '[' -n 3254795 ']' 00:17:33.459 15:02:52 -- nvmf/common.sh@478 -- # killprocess 3254795 00:17:33.459 15:02:52 -- common/autotest_common.sh@926 -- # '[' -z 3254795 ']' 00:17:33.459 15:02:52 -- common/autotest_common.sh@930 -- # kill -0 3254795 00:17:33.459 15:02:52 -- common/autotest_common.sh@931 -- # uname 00:17:33.459 15:02:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:33.459 15:02:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3254795 00:17:33.459 15:02:52 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:17:33.459 15:02:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:33.459 15:02:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3254795' 00:17:33.459 killing process with pid 3254795 00:17:33.459 15:02:52 -- common/autotest_common.sh@945 -- # kill 3254795 00:17:33.459 15:02:52 -- common/autotest_common.sh@950 -- # wait 3254795 00:17:33.717 15:02:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:33.717 15:02:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:33.717 15:02:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:33.717 15:02:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.717 15:02:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:33.717 15:02:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.717 15:02:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.717 15:02:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.254 15:02:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:36.254 00:17:36.254 real 0m47.509s 00:17:36.254 user 1m15.059s 00:17:36.254 sys 0m10.520s 00:17:36.254 15:02:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:36.254 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:17:36.254 ************************************ 00:17:36.254 END TEST nvmf_lvs_grow 00:17:36.254 ************************************ 00:17:36.254 15:02:54 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:36.254 15:02:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:36.254 15:02:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:36.254 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:17:36.254 ************************************ 00:17:36.254 START TEST nvmf_bdev_io_wait 00:17:36.254 ************************************ 00:17:36.254 15:02:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:36.254 * Looking for test storage... 
00:17:36.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.254 15:02:54 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.254 15:02:54 -- nvmf/common.sh@7 -- # uname -s 00:17:36.254 15:02:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.254 15:02:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.254 15:02:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.254 15:02:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.254 15:02:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.254 15:02:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.254 15:02:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.254 15:02:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.254 15:02:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.254 15:02:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.254 15:02:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:36.254 15:02:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:36.254 15:02:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.254 15:02:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.254 15:02:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.254 15:02:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.254 15:02:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.254 15:02:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.254 15:02:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.254 15:02:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.254 15:02:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.254 15:02:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.254 15:02:54 -- paths/export.sh@5 -- # export PATH 00:17:36.254 15:02:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.254 15:02:54 -- nvmf/common.sh@46 -- # : 0 00:17:36.254 15:02:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:36.254 15:02:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:36.254 15:02:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:36.254 15:02:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.254 15:02:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.254 15:02:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:36.254 15:02:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:36.254 15:02:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:36.254 15:02:54 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.254 15:02:54 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.254 15:02:54 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:36.254 15:02:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:36.254 15:02:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.254 15:02:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:36.254 15:02:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:36.254 15:02:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:36.254 15:02:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.254 15:02:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.254 15:02:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.254 15:02:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:36.254 15:02:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:36.254 15:02:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:36.254 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:17:42.819 15:03:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:42.819 15:03:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:42.819 15:03:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:42.819 15:03:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:42.819 15:03:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:42.819 15:03:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:42.819 15:03:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:42.819 15:03:00 -- nvmf/common.sh@294 -- # net_devs=() 00:17:42.819 15:03:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:42.819 15:03:00 -- 
nvmf/common.sh@295 -- # e810=() 00:17:42.819 15:03:00 -- nvmf/common.sh@295 -- # local -ga e810 00:17:42.819 15:03:00 -- nvmf/common.sh@296 -- # x722=() 00:17:42.819 15:03:00 -- nvmf/common.sh@296 -- # local -ga x722 00:17:42.819 15:03:00 -- nvmf/common.sh@297 -- # mlx=() 00:17:42.819 15:03:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:42.819 15:03:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.819 15:03:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:42.819 15:03:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:42.819 15:03:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:42.819 15:03:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:42.819 15:03:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:42.819 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:42.819 15:03:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:42.819 15:03:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:42.819 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:42.819 15:03:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:42.819 15:03:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:42.819 15:03:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.819 15:03:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:42.819 15:03:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.819 15:03:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:17:42.819 Found net devices under 0000:af:00.0: cvl_0_0 00:17:42.819 15:03:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.819 15:03:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:42.819 15:03:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.819 15:03:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:42.819 15:03:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.819 15:03:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:42.819 Found net devices under 0000:af:00.1: cvl_0_1 00:17:42.819 15:03:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.819 15:03:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:42.819 15:03:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:42.819 15:03:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:42.819 15:03:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:42.819 15:03:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.819 15:03:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.819 15:03:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.819 15:03:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:42.819 15:03:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.819 15:03:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.819 15:03:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:42.819 15:03:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.819 15:03:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.819 15:03:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:42.819 15:03:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:42.819 15:03:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.819 15:03:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.819 15:03:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.819 15:03:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.819 15:03:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:42.819 15:03:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.819 15:03:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.819 15:03:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.819 15:03:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:42.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:17:42.819 00:17:42.819 --- 10.0.0.2 ping statistics --- 00:17:42.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.819 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:17:42.819 15:03:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:17:42.819 00:17:42.819 --- 10.0.0.1 ping statistics --- 00:17:42.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.819 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:17:42.819 15:03:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.819 15:03:01 -- nvmf/common.sh@410 -- # return 0 00:17:42.819 15:03:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:42.819 15:03:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.819 15:03:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:42.820 15:03:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:42.820 15:03:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.820 15:03:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:42.820 15:03:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:42.820 15:03:01 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:42.820 15:03:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:42.820 15:03:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:42.820 15:03:01 -- common/autotest_common.sh@10 -- # set +x 00:17:42.820 15:03:01 -- nvmf/common.sh@469 -- # nvmfpid=3259750 00:17:42.820 15:03:01 -- nvmf/common.sh@470 -- # waitforlisten 3259750 00:17:42.820 15:03:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:42.820 15:03:01 -- common/autotest_common.sh@819 -- # '[' -z 3259750 ']' 00:17:42.820 15:03:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.820 15:03:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.820 15:03:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.820 15:03:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.820 15:03:01 -- common/autotest_common.sh@10 -- # set +x 00:17:42.820 [2024-06-11 15:03:01.306261] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:42.820 [2024-06-11 15:03:01.306316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.820 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.820 [2024-06-11 15:03:01.401181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:42.820 [2024-06-11 15:03:01.497293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:42.820 [2024-06-11 15:03:01.497429] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.820 [2024-06-11 15:03:01.497442] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.820 [2024-06-11 15:03:01.497451] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
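The nvmf_tcp_init block traced above is what lets the two E810 ports reach each other; stripped of the xtrace noise it is roughly the following, using the interface names and addresses from this run:
# target-side port moves into its own namespace, initiator-side port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP on the test port
ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check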
00:17:42.820 [2024-06-11 15:03:01.497501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.820 [2024-06-11 15:03:01.497605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.820 [2024-06-11 15:03:01.497710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.820 [2024-06-11 15:03:01.497710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.388 15:03:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.388 15:03:02 -- common/autotest_common.sh@852 -- # return 0 00:17:43.388 15:03:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:43.388 15:03:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:43.388 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.388 15:03:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.388 15:03:02 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:43.388 15:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.388 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.388 15:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.388 15:03:02 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:43.388 15:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.388 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.647 15:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.647 15:03:02 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:43.647 15:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.647 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.647 [2024-06-11 15:03:02.275722] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.647 15:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.647 15:03:02 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:43.647 15:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.647 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.647 Malloc0 00:17:43.647 15:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:43.648 15:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.648 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.648 15:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.648 15:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.648 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.648 15:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.648 15:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.648 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.648 [2024-06-11 15:03:02.345427] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.648 15:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3260092 00:17:43.648 
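Written out without the xtrace prefixes, the target bring-up above is just a handful of RPCs. A minimal sketch (rpc.py path shortened; NQN, serial and addresses as used in this run):
# tiny bdev_io pool (-p 5 -c 1), presumably to force the io_wait path this test exercises
scripts/rpc.py bdev_set_options -p 5 -c 1
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420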
15:03:02 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@30 -- # READ_PID=3260094 00:17:43.648 15:03:02 -- nvmf/common.sh@520 -- # config=() 00:17:43.648 15:03:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:43.648 15:03:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:43.648 15:03:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:43.648 { 00:17:43.648 "params": { 00:17:43.648 "name": "Nvme$subsystem", 00:17:43.648 "trtype": "$TEST_TRANSPORT", 00:17:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.648 "adrfam": "ipv4", 00:17:43.648 "trsvcid": "$NVMF_PORT", 00:17:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.648 "hdgst": ${hdgst:-false}, 00:17:43.648 "ddgst": ${ddgst:-false} 00:17:43.648 }, 00:17:43.648 "method": "bdev_nvme_attach_controller" 00:17:43.648 } 00:17:43.648 EOF 00:17:43.648 )") 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3260096 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:43.648 15:03:02 -- nvmf/common.sh@520 -- # config=() 00:17:43.648 15:03:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:43.648 15:03:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:43.648 15:03:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:43.648 { 00:17:43.648 "params": { 00:17:43.648 "name": "Nvme$subsystem", 00:17:43.648 "trtype": "$TEST_TRANSPORT", 00:17:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.648 "adrfam": "ipv4", 00:17:43.648 "trsvcid": "$NVMF_PORT", 00:17:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.648 "hdgst": ${hdgst:-false}, 00:17:43.648 "ddgst": ${ddgst:-false} 00:17:43.648 }, 00:17:43.648 "method": "bdev_nvme_attach_controller" 00:17:43.648 } 00:17:43.648 EOF 00:17:43.648 )") 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3260099 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@35 -- # sync 00:17:43.648 15:03:02 -- nvmf/common.sh@542 -- # cat 00:17:43.648 15:03:02 -- nvmf/common.sh@520 -- # config=() 00:17:43.648 15:03:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:43.648 15:03:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:43.648 15:03:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:43.648 { 00:17:43.648 "params": { 00:17:43.648 "name": "Nvme$subsystem", 00:17:43.648 "trtype": "$TEST_TRANSPORT", 00:17:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.648 "adrfam": "ipv4", 00:17:43.648 "trsvcid": "$NVMF_PORT", 00:17:43.648 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.648 "hdgst": ${hdgst:-false}, 00:17:43.648 "ddgst": ${ddgst:-false} 00:17:43.648 }, 00:17:43.648 "method": "bdev_nvme_attach_controller" 00:17:43.648 } 00:17:43.648 EOF 00:17:43.648 )") 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:43.648 15:03:02 -- nvmf/common.sh@520 -- # config=() 00:17:43.648 15:03:02 -- nvmf/common.sh@542 -- # cat 00:17:43.648 15:03:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:43.648 15:03:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:43.648 15:03:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:43.648 { 00:17:43.648 "params": { 00:17:43.648 "name": "Nvme$subsystem", 00:17:43.648 "trtype": "$TEST_TRANSPORT", 00:17:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.648 "adrfam": "ipv4", 00:17:43.648 "trsvcid": "$NVMF_PORT", 00:17:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.648 "hdgst": ${hdgst:-false}, 00:17:43.648 "ddgst": ${ddgst:-false} 00:17:43.648 }, 00:17:43.648 "method": "bdev_nvme_attach_controller" 00:17:43.648 } 00:17:43.648 EOF 00:17:43.648 )") 00:17:43.648 15:03:02 -- nvmf/common.sh@542 -- # cat 00:17:43.648 15:03:02 -- target/bdev_io_wait.sh@37 -- # wait 3260092 00:17:43.648 15:03:02 -- nvmf/common.sh@542 -- # cat 00:17:43.648 15:03:02 -- nvmf/common.sh@544 -- # jq . 00:17:43.648 15:03:02 -- nvmf/common.sh@544 -- # jq . 00:17:43.648 15:03:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:43.648 15:03:02 -- nvmf/common.sh@544 -- # jq . 00:17:43.648 15:03:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:43.648 "params": { 00:17:43.648 "name": "Nvme1", 00:17:43.648 "trtype": "tcp", 00:17:43.648 "traddr": "10.0.0.2", 00:17:43.648 "adrfam": "ipv4", 00:17:43.648 "trsvcid": "4420", 00:17:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.648 "hdgst": false, 00:17:43.648 "ddgst": false 00:17:43.648 }, 00:17:43.648 "method": "bdev_nvme_attach_controller" 00:17:43.648 }' 00:17:43.648 15:03:02 -- nvmf/common.sh@544 -- # jq . 
00:17:43.648 15:03:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:43.648 15:03:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:43.648 "params": { 00:17:43.648 "name": "Nvme1", 00:17:43.648 "trtype": "tcp", 00:17:43.648 "traddr": "10.0.0.2", 00:17:43.648 "adrfam": "ipv4", 00:17:43.648 "trsvcid": "4420", 00:17:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.648 "hdgst": false, 00:17:43.648 "ddgst": false 00:17:43.648 }, 00:17:43.648 "method": "bdev_nvme_attach_controller" 00:17:43.648 }' 00:17:43.648 15:03:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:43.648 15:03:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:43.648 "params": { 00:17:43.648 "name": "Nvme1", 00:17:43.648 "trtype": "tcp", 00:17:43.648 "traddr": "10.0.0.2", 00:17:43.648 "adrfam": "ipv4", 00:17:43.648 "trsvcid": "4420", 00:17:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.648 "hdgst": false, 00:17:43.648 "ddgst": false 00:17:43.648 }, 00:17:43.648 "method": "bdev_nvme_attach_controller" 00:17:43.648 }' 00:17:43.648 15:03:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:43.648 15:03:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:43.648 "params": { 00:17:43.648 "name": "Nvme1", 00:17:43.648 "trtype": "tcp", 00:17:43.648 "traddr": "10.0.0.2", 00:17:43.648 "adrfam": "ipv4", 00:17:43.648 "trsvcid": "4420", 00:17:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.648 "hdgst": false, 00:17:43.648 "ddgst": false 00:17:43.648 }, 00:17:43.648 "method": "bdev_nvme_attach_controller" 00:17:43.648 }' 00:17:43.648 [2024-06-11 15:03:02.393943] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:43.648 [2024-06-11 15:03:02.393990] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:43.648 [2024-06-11 15:03:02.396000] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:43.648 [2024-06-11 15:03:02.396063] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:43.648 [2024-06-11 15:03:02.398718] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:43.648 [2024-06-11 15:03:02.398775] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:43.648 [2024-06-11 15:03:02.398767] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:43.648 [2024-06-11 15:03:02.398819] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:43.648 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.907 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.907 [2024-06-11 15:03:02.574296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.907 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.907 [2024-06-11 15:03:02.660327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:43.907 [2024-06-11 15:03:02.665700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.907 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.165 [2024-06-11 15:03:02.751021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:44.165 [2024-06-11 15:03:02.760310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.165 [2024-06-11 15:03:02.821604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.165 [2024-06-11 15:03:02.866036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:44.165 [2024-06-11 15:03:02.905163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:44.424 Running I/O for 1 seconds... 00:17:44.424 Running I/O for 1 seconds... 00:17:44.424 Running I/O for 1 seconds... 00:17:44.424 Running I/O for 1 seconds... 00:17:45.361 00:17:45.361 Latency(us) 00:17:45.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.361 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:45.361 Nvme1n1 : 1.00 167012.21 652.39 0.00 0.00 763.44 305.34 886.23 00:17:45.361 =================================================================================================================== 00:17:45.362 Total : 167012.21 652.39 0.00 0.00 763.44 305.34 886.23 00:17:45.362 00:17:45.362 Latency(us) 00:17:45.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.362 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:45.362 Nvme1n1 : 1.01 8853.37 34.58 0.00 0.00 14399.47 7149.38 22878.02 00:17:45.362 =================================================================================================================== 00:17:45.362 Total : 8853.37 34.58 0.00 0.00 14399.47 7149.38 22878.02 00:17:45.362 00:17:45.362 Latency(us) 00:17:45.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.362 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:45.362 Nvme1n1 : 1.01 8595.36 33.58 0.00 0.00 14829.49 7864.32 27286.81 00:17:45.362 =================================================================================================================== 00:17:45.362 Total : 8595.36 33.58 0.00 0.00 14829.49 7864.32 27286.81 00:17:45.362 00:17:45.362 Latency(us) 00:17:45.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.362 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:45.362 Nvme1n1 : 1.01 7768.63 30.35 0.00 0.00 16409.67 7923.90 30980.65 00:17:45.362 =================================================================================================================== 00:17:45.362 Total : 7768.63 30.35 0.00 0.00 16409.67 7923.90 30980.65 00:17:45.621 15:03:04 -- target/bdev_io_wait.sh@38 -- # wait 3260094 00:17:45.621 
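The four bdevperf instances above share one pattern: each receives its config over an anonymous fd (the /dev/fd/63 in the trace is evidently bash process substitution around gen_nvmf_target_json, whose expanded bdev_nvme_attach_controller params are shown by the printf lines), and only the core mask, instance id and workload differ between them. A hedged sketch of the write instance and the wait that follows, with flags copied from this run:
# write-workload instance; the read/flush/unmap runs change only -m, -i and -w
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
# ...the other three instances are started the same way...
wait $WRITE_PID        # then READ_PID, FLUSH_PID and UNMAP_PID are waited on in turn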
15:03:04 -- target/bdev_io_wait.sh@39 -- # wait 3260096 00:17:45.621 15:03:04 -- target/bdev_io_wait.sh@40 -- # wait 3260099 00:17:45.621 15:03:04 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.621 15:03:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.621 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:17:45.621 15:03:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.621 15:03:04 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:45.621 15:03:04 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:45.621 15:03:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:45.621 15:03:04 -- nvmf/common.sh@116 -- # sync 00:17:45.621 15:03:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:45.621 15:03:04 -- nvmf/common.sh@119 -- # set +e 00:17:45.621 15:03:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:45.621 15:03:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:45.621 rmmod nvme_tcp 00:17:45.880 rmmod nvme_fabrics 00:17:45.880 rmmod nvme_keyring 00:17:45.880 15:03:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:45.880 15:03:04 -- nvmf/common.sh@123 -- # set -e 00:17:45.880 15:03:04 -- nvmf/common.sh@124 -- # return 0 00:17:45.880 15:03:04 -- nvmf/common.sh@477 -- # '[' -n 3259750 ']' 00:17:45.880 15:03:04 -- nvmf/common.sh@478 -- # killprocess 3259750 00:17:45.880 15:03:04 -- common/autotest_common.sh@926 -- # '[' -z 3259750 ']' 00:17:45.880 15:03:04 -- common/autotest_common.sh@930 -- # kill -0 3259750 00:17:45.880 15:03:04 -- common/autotest_common.sh@931 -- # uname 00:17:45.880 15:03:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:45.880 15:03:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3259750 00:17:45.880 15:03:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:45.880 15:03:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:45.880 15:03:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3259750' 00:17:45.880 killing process with pid 3259750 00:17:45.880 15:03:04 -- common/autotest_common.sh@945 -- # kill 3259750 00:17:45.880 15:03:04 -- common/autotest_common.sh@950 -- # wait 3259750 00:17:46.139 15:03:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:46.139 15:03:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:46.139 15:03:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:46.139 15:03:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.139 15:03:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:46.139 15:03:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.139 15:03:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.139 15:03:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.046 15:03:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:48.046 00:17:48.046 real 0m12.275s 00:17:48.046 user 0m20.614s 00:17:48.046 sys 0m6.705s 00:17:48.046 15:03:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.046 15:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:48.046 ************************************ 00:17:48.046 END TEST nvmf_bdev_io_wait 00:17:48.046 ************************************ 00:17:48.046 15:03:06 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:48.046 15:03:06 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:17:48.046 15:03:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:48.046 15:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:48.305 ************************************ 00:17:48.305 START TEST nvmf_queue_depth 00:17:48.305 ************************************ 00:17:48.305 15:03:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:48.305 * Looking for test storage... 00:17:48.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.305 15:03:06 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.305 15:03:06 -- nvmf/common.sh@7 -- # uname -s 00:17:48.305 15:03:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.305 15:03:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.305 15:03:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.305 15:03:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.305 15:03:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.305 15:03:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.305 15:03:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.305 15:03:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.305 15:03:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.305 15:03:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.305 15:03:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:48.305 15:03:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:48.305 15:03:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.305 15:03:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.305 15:03:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.305 15:03:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.305 15:03:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.305 15:03:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.305 15:03:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.305 15:03:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.305 15:03:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.305 15:03:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.305 15:03:06 -- paths/export.sh@5 -- # export PATH 00:17:48.305 15:03:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.305 15:03:06 -- nvmf/common.sh@46 -- # : 0 00:17:48.305 15:03:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:48.305 15:03:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:48.305 15:03:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:48.305 15:03:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.305 15:03:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.305 15:03:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:48.305 15:03:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:48.305 15:03:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:48.305 15:03:07 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:48.305 15:03:07 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:48.305 15:03:07 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.305 15:03:07 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:48.305 15:03:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:48.305 15:03:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.305 15:03:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:48.305 15:03:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:48.305 15:03:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:48.305 15:03:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.305 15:03:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.305 15:03:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.305 15:03:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:48.305 15:03:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:48.305 15:03:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:48.305 15:03:07 -- common/autotest_common.sh@10 -- # set +x 00:17:54.903 15:03:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:54.903 15:03:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:54.903 15:03:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:54.903 15:03:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:54.903 15:03:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:54.903 15:03:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:54.903 15:03:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:54.903 15:03:13 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:54.903 15:03:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:54.903 15:03:13 -- nvmf/common.sh@295 -- # e810=() 00:17:54.903 15:03:13 -- nvmf/common.sh@295 -- # local -ga e810 00:17:54.903 15:03:13 -- nvmf/common.sh@296 -- # x722=() 00:17:54.903 15:03:13 -- nvmf/common.sh@296 -- # local -ga x722 00:17:54.903 15:03:13 -- nvmf/common.sh@297 -- # mlx=() 00:17:54.903 15:03:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:54.903 15:03:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.903 15:03:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:54.903 15:03:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:54.903 15:03:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:54.903 15:03:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:54.903 15:03:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:54.903 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:54.903 15:03:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:54.903 15:03:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:54.903 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:54.903 15:03:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:54.903 15:03:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:54.903 15:03:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.903 15:03:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:54.903 15:03:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:54.903 15:03:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:54.903 Found net devices under 0000:af:00.0: cvl_0_0 00:17:54.903 15:03:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.903 15:03:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:54.903 15:03:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.903 15:03:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:54.903 15:03:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.903 15:03:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:54.903 Found net devices under 0000:af:00.1: cvl_0_1 00:17:54.903 15:03:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.903 15:03:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:54.903 15:03:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:54.903 15:03:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:54.903 15:03:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.903 15:03:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.903 15:03:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.903 15:03:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:54.903 15:03:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.903 15:03:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.903 15:03:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:54.903 15:03:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.903 15:03:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.903 15:03:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:54.903 15:03:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:54.903 15:03:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.903 15:03:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.903 15:03:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.903 15:03:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.903 15:03:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:54.903 15:03:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.903 15:03:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.903 15:03:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.903 15:03:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:54.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:17:54.903 00:17:54.903 --- 10.0.0.2 ping statistics --- 00:17:54.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.903 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:54.903 15:03:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:17:54.903 00:17:54.903 --- 10.0.0.1 ping statistics --- 00:17:54.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.903 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:54.903 15:03:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.903 15:03:13 -- nvmf/common.sh@410 -- # return 0 00:17:54.903 15:03:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:54.903 15:03:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.903 15:03:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:54.903 15:03:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.903 15:03:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:54.903 15:03:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:54.903 15:03:13 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:54.903 15:03:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:54.904 15:03:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:54.904 15:03:13 -- common/autotest_common.sh@10 -- # set +x 00:17:54.904 15:03:13 -- nvmf/common.sh@469 -- # nvmfpid=3264838 00:17:54.904 15:03:13 -- nvmf/common.sh@470 -- # waitforlisten 3264838 00:17:54.904 15:03:13 -- common/autotest_common.sh@819 -- # '[' -z 3264838 ']' 00:17:54.904 15:03:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.904 15:03:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:54.904 15:03:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.904 15:03:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:54.904 15:03:13 -- common/autotest_common.sh@10 -- # set +x 00:17:54.904 15:03:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:54.904 [2024-06-11 15:03:13.408949] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:54.904 [2024-06-11 15:03:13.409004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.904 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.904 [2024-06-11 15:03:13.495660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.904 [2024-06-11 15:03:13.582547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:54.904 [2024-06-11 15:03:13.582690] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.904 [2024-06-11 15:03:13.582702] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.904 [2024-06-11 15:03:13.582711] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
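nvmfappstart, as traced above for the queue_depth run, amounts to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A rough approximation (the rpc_get_methods poll below is a stand-in for the real waitforlisten helper, which does more bookkeeping):
# start the target on core mask 0x2 inside the namespace, then wait for /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done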
00:17:54.904 [2024-06-11 15:03:13.582734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.842 15:03:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:55.842 15:03:14 -- common/autotest_common.sh@852 -- # return 0 00:17:55.842 15:03:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:55.842 15:03:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:55.842 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:55.842 15:03:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.842 15:03:14 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:55.842 15:03:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:55.842 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:55.842 [2024-06-11 15:03:14.363412] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.842 15:03:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:55.842 15:03:14 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:55.842 15:03:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:55.842 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:55.842 Malloc0 00:17:55.842 15:03:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:55.842 15:03:14 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:55.842 15:03:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:55.842 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:55.842 15:03:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:55.842 15:03:14 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:55.842 15:03:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:55.842 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:55.842 15:03:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:55.842 15:03:14 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.842 15:03:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:55.842 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:55.842 [2024-06-11 15:03:14.415944] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.842 15:03:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:55.842 15:03:14 -- target/queue_depth.sh@30 -- # bdevperf_pid=3265091 00:17:55.842 15:03:14 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.842 15:03:14 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:55.842 15:03:14 -- target/queue_depth.sh@33 -- # waitforlisten 3265091 /var/tmp/bdevperf.sock 00:17:55.842 15:03:14 -- common/autotest_common.sh@819 -- # '[' -z 3265091 ']' 00:17:55.842 15:03:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.842 15:03:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:55.842 15:03:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:55.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.842 15:03:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:55.842 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:55.842 [2024-06-11 15:03:14.466141] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:55.842 [2024-06-11 15:03:14.466194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265091 ] 00:17:55.842 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.842 [2024-06-11 15:03:14.554476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.842 [2024-06-11 15:03:14.638649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.779 15:03:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:56.779 15:03:15 -- common/autotest_common.sh@852 -- # return 0 00:17:56.779 15:03:15 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:56.779 15:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.779 15:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:56.779 NVMe0n1 00:17:56.779 15:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.779 15:03:15 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.779 Running I/O for 10 seconds... 00:18:08.988 00:18:08.988 Latency(us) 00:18:08.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.988 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:08.988 Verification LBA range: start 0x0 length 0x4000 00:18:08.988 NVMe0n1 : 10.07 12069.75 47.15 0.00 0.00 84484.75 17039.36 63867.81 00:18:08.988 =================================================================================================================== 00:18:08.988 Total : 12069.75 47.15 0.00 0.00 84484.75 17039.36 63867.81 00:18:08.988 0 00:18:08.988 15:03:25 -- target/queue_depth.sh@39 -- # killprocess 3265091 00:18:08.988 15:03:25 -- common/autotest_common.sh@926 -- # '[' -z 3265091 ']' 00:18:08.988 15:03:25 -- common/autotest_common.sh@930 -- # kill -0 3265091 00:18:08.988 15:03:25 -- common/autotest_common.sh@931 -- # uname 00:18:08.988 15:03:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:08.988 15:03:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3265091 00:18:08.988 15:03:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:08.988 15:03:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:08.988 15:03:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3265091' 00:18:08.988 killing process with pid 3265091 00:18:08.988 15:03:25 -- common/autotest_common.sh@945 -- # kill 3265091 00:18:08.988 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.988 00:18:08.988 Latency(us) 00:18:08.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.988 =================================================================================================================== 00:18:08.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.988 15:03:25 -- 
common/autotest_common.sh@950 -- # wait 3265091 00:18:08.988 15:03:26 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:08.988 15:03:26 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:08.988 15:03:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:08.988 15:03:26 -- nvmf/common.sh@116 -- # sync 00:18:08.988 15:03:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:08.988 15:03:26 -- nvmf/common.sh@119 -- # set +e 00:18:08.988 15:03:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:08.988 15:03:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:08.988 rmmod nvme_tcp 00:18:08.988 rmmod nvme_fabrics 00:18:08.988 rmmod nvme_keyring 00:18:08.988 15:03:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:08.988 15:03:26 -- nvmf/common.sh@123 -- # set -e 00:18:08.988 15:03:26 -- nvmf/common.sh@124 -- # return 0 00:18:08.988 15:03:26 -- nvmf/common.sh@477 -- # '[' -n 3264838 ']' 00:18:08.988 15:03:26 -- nvmf/common.sh@478 -- # killprocess 3264838 00:18:08.988 15:03:26 -- common/autotest_common.sh@926 -- # '[' -z 3264838 ']' 00:18:08.988 15:03:26 -- common/autotest_common.sh@930 -- # kill -0 3264838 00:18:08.988 15:03:26 -- common/autotest_common.sh@931 -- # uname 00:18:08.988 15:03:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:08.988 15:03:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3264838 00:18:08.988 15:03:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:08.988 15:03:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:08.988 15:03:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3264838' 00:18:08.988 killing process with pid 3264838 00:18:08.988 15:03:26 -- common/autotest_common.sh@945 -- # kill 3264838 00:18:08.988 15:03:26 -- common/autotest_common.sh@950 -- # wait 3264838 00:18:08.988 15:03:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:08.988 15:03:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:08.988 15:03:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:08.988 15:03:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.988 15:03:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:08.988 15:03:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.988 15:03:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.988 15:03:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.925 15:03:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:09.925 00:18:09.925 real 0m21.533s 00:18:09.925 user 0m25.802s 00:18:09.925 sys 0m6.333s 00:18:09.925 15:03:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.925 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:18:09.925 ************************************ 00:18:09.925 END TEST nvmf_queue_depth 00:18:09.925 ************************************ 00:18:09.925 15:03:28 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:09.925 15:03:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:09.925 15:03:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:09.925 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:18:09.925 ************************************ 00:18:09.925 START TEST nvmf_multipath 00:18:09.925 ************************************ 00:18:09.925 15:03:28 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:09.925 * Looking for test storage... 00:18:09.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.925 15:03:28 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.925 15:03:28 -- nvmf/common.sh@7 -- # uname -s 00:18:09.925 15:03:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.925 15:03:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.925 15:03:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.925 15:03:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.925 15:03:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.925 15:03:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.925 15:03:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.925 15:03:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.925 15:03:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.925 15:03:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.925 15:03:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:09.925 15:03:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:09.925 15:03:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.925 15:03:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.925 15:03:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.925 15:03:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.925 15:03:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.925 15:03:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.925 15:03:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.925 15:03:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.925 15:03:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.925 15:03:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.925 15:03:28 -- paths/export.sh@5 -- # export PATH 00:18:09.925 15:03:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.925 15:03:28 -- nvmf/common.sh@46 -- # : 0 00:18:09.925 15:03:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:09.925 15:03:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:09.925 15:03:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:09.925 15:03:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.925 15:03:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.925 15:03:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:09.925 15:03:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:09.925 15:03:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:09.925 15:03:28 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.925 15:03:28 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.925 15:03:28 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:09.925 15:03:28 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.925 15:03:28 -- target/multipath.sh@43 -- # nvmftestinit 00:18:09.925 15:03:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:09.925 15:03:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.925 15:03:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:09.925 15:03:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:09.925 15:03:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:09.925 15:03:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.925 15:03:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.925 15:03:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.925 15:03:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:09.925 15:03:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:09.925 15:03:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:09.925 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:18:16.497 15:03:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:16.497 15:03:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:16.497 15:03:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:16.497 15:03:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:16.497 15:03:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:16.497 15:03:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:16.497 15:03:34 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:16.497 15:03:34 -- nvmf/common.sh@294 -- # net_devs=() 00:18:16.497 15:03:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:16.497 15:03:34 -- nvmf/common.sh@295 -- # e810=() 00:18:16.497 15:03:34 -- nvmf/common.sh@295 -- # local -ga e810 00:18:16.497 15:03:34 -- nvmf/common.sh@296 -- # x722=() 00:18:16.497 15:03:34 -- nvmf/common.sh@296 -- # local -ga x722 00:18:16.497 15:03:34 -- nvmf/common.sh@297 -- # mlx=() 00:18:16.497 15:03:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:16.497 15:03:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.497 15:03:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:16.497 15:03:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:16.497 15:03:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:16.497 15:03:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:16.497 15:03:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:16.497 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:16.497 15:03:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:16.497 15:03:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:16.497 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:16.497 15:03:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:16.497 15:03:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:16.497 15:03:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.497 15:03:34 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:18:16.497 15:03:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.497 15:03:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:16.497 Found net devices under 0000:af:00.0: cvl_0_0 00:18:16.497 15:03:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.497 15:03:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:16.497 15:03:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.497 15:03:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:16.497 15:03:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.497 15:03:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:16.497 Found net devices under 0000:af:00.1: cvl_0_1 00:18:16.497 15:03:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.497 15:03:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:16.497 15:03:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:16.497 15:03:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:16.497 15:03:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.497 15:03:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.497 15:03:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.497 15:03:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:16.497 15:03:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.497 15:03:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.497 15:03:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:16.497 15:03:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.497 15:03:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.497 15:03:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:16.497 15:03:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:16.497 15:03:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.497 15:03:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.497 15:03:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.497 15:03:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.497 15:03:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:16.497 15:03:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.497 15:03:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.497 15:03:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.497 15:03:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:16.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:18:16.497 00:18:16.497 --- 10.0.0.2 ping statistics --- 00:18:16.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.497 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:16.497 15:03:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:18:16.497 00:18:16.497 --- 10.0.0.1 ping statistics --- 00:18:16.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.497 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:18:16.497 15:03:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.497 15:03:34 -- nvmf/common.sh@410 -- # return 0 00:18:16.497 15:03:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:16.497 15:03:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.497 15:03:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.497 15:03:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.497 15:03:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.497 15:03:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.497 15:03:34 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:16.497 15:03:34 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:16.497 only one NIC for nvmf test 00:18:16.497 15:03:34 -- target/multipath.sh@47 -- # nvmftestfini 00:18:16.497 15:03:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:16.497 15:03:34 -- nvmf/common.sh@116 -- # sync 00:18:16.497 15:03:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:16.497 15:03:34 -- nvmf/common.sh@119 -- # set +e 00:18:16.498 15:03:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:16.498 15:03:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:16.498 rmmod nvme_tcp 00:18:16.498 rmmod nvme_fabrics 00:18:16.498 rmmod nvme_keyring 00:18:16.498 15:03:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:16.498 15:03:34 -- nvmf/common.sh@123 -- # set -e 00:18:16.498 15:03:34 -- nvmf/common.sh@124 -- # return 0 00:18:16.498 15:03:34 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:16.498 15:03:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:16.498 15:03:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:16.498 15:03:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:16.498 15:03:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.498 15:03:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:16.498 15:03:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.498 15:03:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.498 15:03:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.403 15:03:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:18.403 15:03:36 -- target/multipath.sh@48 -- # exit 0 00:18:18.403 15:03:36 -- target/multipath.sh@1 -- # nvmftestfini 00:18:18.403 15:03:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:18.403 15:03:36 -- nvmf/common.sh@116 -- # sync 00:18:18.403 15:03:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:18.403 15:03:36 -- nvmf/common.sh@119 -- # set +e 00:18:18.403 15:03:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:18.403 15:03:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:18.403 15:03:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:18.403 15:03:36 -- nvmf/common.sh@123 -- # set -e 00:18:18.403 15:03:36 -- nvmf/common.sh@124 -- # return 0 00:18:18.403 15:03:36 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:18.403 15:03:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:18.403 15:03:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:18.403 15:03:36 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:18.403 15:03:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.403 15:03:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:18.403 15:03:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.403 15:03:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.403 15:03:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.403 15:03:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:18.403 00:18:18.403 real 0m8.533s 00:18:18.403 user 0m1.775s 00:18:18.403 sys 0m4.751s 00:18:18.403 15:03:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.403 15:03:36 -- common/autotest_common.sh@10 -- # set +x 00:18:18.403 ************************************ 00:18:18.403 END TEST nvmf_multipath 00:18:18.403 ************************************ 00:18:18.403 15:03:37 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:18.403 15:03:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:18.403 15:03:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:18.403 15:03:37 -- common/autotest_common.sh@10 -- # set +x 00:18:18.403 ************************************ 00:18:18.403 START TEST nvmf_zcopy 00:18:18.403 ************************************ 00:18:18.403 15:03:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:18.403 * Looking for test storage... 00:18:18.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.403 15:03:37 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.403 15:03:37 -- nvmf/common.sh@7 -- # uname -s 00:18:18.403 15:03:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.403 15:03:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.403 15:03:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.403 15:03:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.403 15:03:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.404 15:03:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.404 15:03:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.404 15:03:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.404 15:03:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.404 15:03:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.404 15:03:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:18.404 15:03:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:18.404 15:03:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.404 15:03:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.404 15:03:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.404 15:03:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.404 15:03:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.404 15:03:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.404 15:03:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.404 15:03:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.404 15:03:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.404 15:03:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.404 15:03:37 -- paths/export.sh@5 -- # export PATH 00:18:18.404 15:03:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.404 15:03:37 -- nvmf/common.sh@46 -- # : 0 00:18:18.404 15:03:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:18.404 15:03:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:18.404 15:03:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:18.404 15:03:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.404 15:03:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.404 15:03:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:18.404 15:03:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:18.404 15:03:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:18.404 15:03:37 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:18.404 15:03:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:18.404 15:03:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.404 15:03:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:18.404 15:03:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:18.404 15:03:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:18.404 15:03:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.404 15:03:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.404 15:03:37 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.404 15:03:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:18.404 15:03:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:18.404 15:03:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:18.404 15:03:37 -- common/autotest_common.sh@10 -- # set +x 00:18:24.973 15:03:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:24.973 15:03:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:24.974 15:03:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:24.974 15:03:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:24.974 15:03:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:24.974 15:03:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:24.974 15:03:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:24.974 15:03:43 -- nvmf/common.sh@294 -- # net_devs=() 00:18:24.974 15:03:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:24.974 15:03:43 -- nvmf/common.sh@295 -- # e810=() 00:18:24.974 15:03:43 -- nvmf/common.sh@295 -- # local -ga e810 00:18:24.974 15:03:43 -- nvmf/common.sh@296 -- # x722=() 00:18:24.974 15:03:43 -- nvmf/common.sh@296 -- # local -ga x722 00:18:24.974 15:03:43 -- nvmf/common.sh@297 -- # mlx=() 00:18:24.974 15:03:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:24.974 15:03:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.974 15:03:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:24.974 15:03:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:24.974 15:03:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:24.974 15:03:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:24.974 15:03:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:24.974 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:24.974 15:03:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:24.974 15:03:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:24.974 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:24.974 
15:03:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:24.974 15:03:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:24.974 15:03:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.974 15:03:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:24.974 15:03:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.974 15:03:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:24.974 Found net devices under 0000:af:00.0: cvl_0_0 00:18:24.974 15:03:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.974 15:03:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:24.974 15:03:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.974 15:03:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:24.974 15:03:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.974 15:03:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:24.974 Found net devices under 0000:af:00.1: cvl_0_1 00:18:24.974 15:03:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.974 15:03:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:24.974 15:03:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:24.974 15:03:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:24.974 15:03:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.974 15:03:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.974 15:03:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:24.974 15:03:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:24.974 15:03:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:24.974 15:03:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:24.974 15:03:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:24.974 15:03:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:24.974 15:03:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.974 15:03:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:24.974 15:03:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:24.974 15:03:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:24.974 15:03:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:24.974 15:03:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:24.974 15:03:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:24.974 15:03:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:24.974 15:03:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.974 15:03:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.974 15:03:43 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.974 15:03:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:24.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:18:24.974 00:18:24.974 --- 10.0.0.2 ping statistics --- 00:18:24.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.974 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:18:24.974 15:03:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:18:24.974 00:18:24.974 --- 10.0.0.1 ping statistics --- 00:18:24.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.974 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:18:24.974 15:03:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.974 15:03:43 -- nvmf/common.sh@410 -- # return 0 00:18:24.974 15:03:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:24.974 15:03:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.974 15:03:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:24.974 15:03:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.974 15:03:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:24.974 15:03:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:24.974 15:03:43 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:24.974 15:03:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:24.974 15:03:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:24.974 15:03:43 -- common/autotest_common.sh@10 -- # set +x 00:18:24.974 15:03:43 -- nvmf/common.sh@469 -- # nvmfpid=3275255 00:18:24.974 15:03:43 -- nvmf/common.sh@470 -- # waitforlisten 3275255 00:18:24.974 15:03:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.974 15:03:43 -- common/autotest_common.sh@819 -- # '[' -z 3275255 ']' 00:18:24.974 15:03:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.974 15:03:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:24.974 15:03:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.974 15:03:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:24.974 15:03:43 -- common/autotest_common.sh@10 -- # set +x 00:18:24.974 [2024-06-11 15:03:43.630805] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
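Restated from the trace above, the physical-NIC topology that nvmf_tcp_init builds for this target is small enough to read in one piece. A sketch using the interface names this rig enumerates for its two ice-bound ports (cvl_0_0 / cvl_0_1); the commands are the ones the common helper ran, and need root:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator keeps 10.0.0.1 in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # accept TCP/4420 on the initiator-side port
ping -c 1 10.0.0.2                                           # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target namespace -> root namespace

With both pings answering, nvmf_tgt is launched inside cvl_0_0_ns_spdk, which is the 'ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2' startup whose banner appears above.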
00:18:24.974 [2024-06-11 15:03:43.630862] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.974 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.974 [2024-06-11 15:03:43.717739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.974 [2024-06-11 15:03:43.803981] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:24.974 [2024-06-11 15:03:43.804127] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.974 [2024-06-11 15:03:43.804139] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.975 [2024-06-11 15:03:43.804148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.975 [2024-06-11 15:03:43.804169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.910 15:03:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:25.910 15:03:44 -- common/autotest_common.sh@852 -- # return 0 00:18:25.910 15:03:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:25.910 15:03:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:25.910 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.910 15:03:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.910 15:03:44 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:25.911 15:03:44 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:25.911 15:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.911 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.911 [2024-06-11 15:03:44.597908] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.911 15:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.911 15:03:44 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:25.911 15:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.911 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.911 15:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.911 15:03:44 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.911 15:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.911 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.911 [2024-06-11 15:03:44.614054] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.911 15:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.911 15:03:44 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:25.911 15:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.911 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.911 15:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.911 15:03:44 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:25.911 15:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.911 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.911 malloc0 00:18:25.911 15:03:44 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:25.911 15:03:44 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.911 15:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.911 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.911 15:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.911 15:03:44 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:25.911 15:03:44 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:25.911 15:03:44 -- nvmf/common.sh@520 -- # config=() 00:18:25.911 15:03:44 -- nvmf/common.sh@520 -- # local subsystem config 00:18:25.911 15:03:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:25.911 15:03:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:25.911 { 00:18:25.911 "params": { 00:18:25.911 "name": "Nvme$subsystem", 00:18:25.911 "trtype": "$TEST_TRANSPORT", 00:18:25.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:25.911 "adrfam": "ipv4", 00:18:25.911 "trsvcid": "$NVMF_PORT", 00:18:25.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:25.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:25.911 "hdgst": ${hdgst:-false}, 00:18:25.911 "ddgst": ${ddgst:-false} 00:18:25.911 }, 00:18:25.911 "method": "bdev_nvme_attach_controller" 00:18:25.911 } 00:18:25.911 EOF 00:18:25.911 )") 00:18:25.911 15:03:44 -- nvmf/common.sh@542 -- # cat 00:18:25.911 15:03:44 -- nvmf/common.sh@544 -- # jq . 00:18:25.911 15:03:44 -- nvmf/common.sh@545 -- # IFS=, 00:18:25.911 15:03:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:25.911 "params": { 00:18:25.911 "name": "Nvme1", 00:18:25.911 "trtype": "tcp", 00:18:25.911 "traddr": "10.0.0.2", 00:18:25.911 "adrfam": "ipv4", 00:18:25.911 "trsvcid": "4420", 00:18:25.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.911 "hdgst": false, 00:18:25.911 "ddgst": false 00:18:25.911 }, 00:18:25.911 "method": "bdev_nvme_attach_controller" 00:18:25.911 }' 00:18:25.911 [2024-06-11 15:03:44.692363] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:25.911 [2024-06-11 15:03:44.692417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275383 ] 00:18:25.911 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.170 [2024-06-11 15:03:44.780442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.170 [2024-06-11 15:03:44.866242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.429 Running I/O for 10 seconds... 
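Unlike the earlier bdevperf instance, this one is configured entirely up front: gen_nvmf_target_json prints the bdev_nvme_attach_controller fragment shown in the printf above, and bdevperf consumes it as a JSON config on /dev/fd/62. A rough stand-alone equivalent, written to a temp file for readability; the outer subsystems/config wrapper is an assumption about how the helper assembles the fragment, while the params block and the command-line flags are taken from the log:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192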
00:18:36.403 00:18:36.403 Latency(us) 00:18:36.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.403 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:36.403 Verification LBA range: start 0x0 length 0x1000 00:18:36.403 Nvme1n1 : 10.01 8594.46 67.14 0.00 0.00 14854.08 1437.32 20852.36 00:18:36.403 =================================================================================================================== 00:18:36.403 Total : 8594.46 67.14 0.00 0.00 14854.08 1437.32 20852.36 00:18:36.662 15:03:55 -- target/zcopy.sh@39 -- # perfpid=3277378 00:18:36.662 15:03:55 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:36.662 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.662 15:03:55 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:36.662 15:03:55 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:36.662 15:03:55 -- nvmf/common.sh@520 -- # config=() 00:18:36.662 15:03:55 -- nvmf/common.sh@520 -- # local subsystem config 00:18:36.662 15:03:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:36.662 15:03:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:36.662 { 00:18:36.662 "params": { 00:18:36.662 "name": "Nvme$subsystem", 00:18:36.662 "trtype": "$TEST_TRANSPORT", 00:18:36.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:36.662 "adrfam": "ipv4", 00:18:36.662 "trsvcid": "$NVMF_PORT", 00:18:36.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:36.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:36.662 "hdgst": ${hdgst:-false}, 00:18:36.662 "ddgst": ${ddgst:-false} 00:18:36.662 }, 00:18:36.662 "method": "bdev_nvme_attach_controller" 00:18:36.662 } 00:18:36.662 EOF 00:18:36.662 )") 00:18:36.662 [2024-06-11 15:03:55.332422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.662 [2024-06-11 15:03:55.332457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.662 15:03:55 -- nvmf/common.sh@542 -- # cat 00:18:36.662 15:03:55 -- nvmf/common.sh@544 -- # jq . 
00:18:36.662 15:03:55 -- nvmf/common.sh@545 -- # IFS=, 00:18:36.662 15:03:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:36.662 "params": { 00:18:36.662 "name": "Nvme1", 00:18:36.662 "trtype": "tcp", 00:18:36.662 "traddr": "10.0.0.2", 00:18:36.662 "adrfam": "ipv4", 00:18:36.662 "trsvcid": "4420", 00:18:36.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.662 "hdgst": false, 00:18:36.662 "ddgst": false 00:18:36.662 }, 00:18:36.662 "method": "bdev_nvme_attach_controller" 00:18:36.662 }' 00:18:36.662 [2024-06-11 15:03:55.344424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.662 [2024-06-11 15:03:55.344440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.662 [2024-06-11 15:03:55.352442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.662 [2024-06-11 15:03:55.352456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.662 [2024-06-11 15:03:55.360466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.662 [2024-06-11 15:03:55.360479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.662 [2024-06-11 15:03:55.368488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.662 [2024-06-11 15:03:55.368501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.662 [2024-06-11 15:03:55.372273] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:36.662 [2024-06-11 15:03:55.372327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277378 ] 00:18:36.662 [2024-06-11 15:03:55.376512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.662 [2024-06-11 15:03:55.376525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.662 [2024-06-11 15:03:55.388546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.662 [2024-06-11 15:03:55.388559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.662 [2024-06-11 15:03:55.396566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.662 [2024-06-11 15:03:55.396578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.663 [2024-06-11 15:03:55.404592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.663 [2024-06-11 15:03:55.404604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.663 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.663 [2024-06-11 15:03:55.412613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.663 [2024-06-11 15:03:55.412626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.663 [2024-06-11 15:03:55.420635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.663 [2024-06-11 15:03:55.420648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.663 [2024-06-11 15:03:55.432672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:18:36.663 [2024-06-11 15:03:55.432685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two-line error pair below repeats continuously with only its timestamps changing, from 00:18:36.663 [2024-06-11 15:03:55.440694] through 00:18:40.041 [2024-06-11 15:03:58.839470] ...]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... distinct messages interleaved with the repeated pair ...]
00:18:36.663 [2024-06-11 15:03:55.460129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:36.923 [2024-06-11 15:03:55.546858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:36.923 Running I/O for 5 seconds...
00:18:40.041 [2024-06-11 15:03:58.850008] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.041 [2024-06-11 15:03:58.850037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.041 [2024-06-11 15:03:58.865069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.041 [2024-06-11 15:03:58.865091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.041 [2024-06-11 15:03:58.874559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.041 [2024-06-11 15:03:58.874582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.300 [2024-06-11 15:03:58.886163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.886186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:58.897262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.897285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:58.907983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.908008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:58.922382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.922407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:58.931471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.931494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:58.942805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.942827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:58.953750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.953773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:58.964973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.964994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:58.982234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.982257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:58.993061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:58.993084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.004068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.004099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.016752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.016775] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.026245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.026267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.042249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.042273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.051935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.051958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.063198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.063220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.072968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.072991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.084733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.084756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.100370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.100398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.109864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.109887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.121256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.121278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.301 [2024-06-11 15:03:59.132018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.301 [2024-06-11 15:03:59.132048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.142776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.142798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.160450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.160473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.170004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.170033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.181046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.181068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.191123] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.191146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.202490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.202512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.219599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.219621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.229151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.229174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.240232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.240255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.251310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.251332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.262363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.262386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.278378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.278402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.287460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.287481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.298925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.298947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.311451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.311474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.321020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.321065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.336097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.336121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.345474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.345497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.356840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.356863] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.367540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.367564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.378443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.378467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.391453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.391476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.560 [2024-06-11 15:03:59.400729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.560 [2024-06-11 15:03:59.400753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.818 [2024-06-11 15:03:59.412092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.818 [2024-06-11 15:03:59.412115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.818 [2024-06-11 15:03:59.422796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.818 [2024-06-11 15:03:59.422820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.818 [2024-06-11 15:03:59.433921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.818 [2024-06-11 15:03:59.433943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.818 [2024-06-11 15:03:59.446917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.818 [2024-06-11 15:03:59.446940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.818 [2024-06-11 15:03:59.456217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.818 [2024-06-11 15:03:59.456239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.818 [2024-06-11 15:03:59.467563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.818 [2024-06-11 15:03:59.467587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.818 [2024-06-11 15:03:59.478542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.818 [2024-06-11 15:03:59.478565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.818 [2024-06-11 15:03:59.489421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.818 [2024-06-11 15:03:59.489444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.818 [2024-06-11 15:03:59.503820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.818 [2024-06-11 15:03:59.503844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.513770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.513793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.525121] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.525146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.535940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.535967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.546866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.546888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.563882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.563904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.573520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.573543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.584877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.584899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.595636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.595660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.606693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.606716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.624514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.624537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.634787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.634811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.645225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.645248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.819 [2024-06-11 15:03:59.656134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.819 [2024-06-11 15:03:59.656156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.667463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.667486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.682445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.682469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.692350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.692373] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.703738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.703761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.714640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.714664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.725497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.725521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.738409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.738432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.747978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.748001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.759531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.759558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.772249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.772273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.781533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.781556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.797429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.797452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.807481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.807504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.818478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.818501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.829438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.829461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.840182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.840205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.852976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.852998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.862687] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.862710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.874399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.076 [2024-06-11 15:03:59.874421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.076 [2024-06-11 15:03:59.885307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.077 [2024-06-11 15:03:59.885330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.077 [2024-06-11 15:03:59.896360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.077 [2024-06-11 15:03:59.896388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.077 [2024-06-11 15:03:59.911293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.077 [2024-06-11 15:03:59.911316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:03:59.920892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:03:59.920915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:03:59.932155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:03:59.932177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:03:59.942791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:03:59.942813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:03:59.953331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:03:59.953353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:03:59.968241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:03:59.968264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:03:59.977803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:03:59.977831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:03:59.989529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:03:59.989551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.002435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.002458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.012935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.012960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.024250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.024274] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.034534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.034557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.045986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.046010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.056925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.056948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.067664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.067686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.085111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.085135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.094831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.094855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.106347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.106370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.116926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.116951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.127689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.127712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.138701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.138725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.149340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.149364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.160397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.160420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.335 [2024-06-11 15:04:00.171096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.335 [2024-06-11 15:04:00.171118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.593 [2024-06-11 15:04:00.181964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.593 [2024-06-11 15:04:00.181986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.593 [2024-06-11 15:04:00.195992] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.593 [2024-06-11 15:04:00.196016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.593 [2024-06-11 15:04:00.206292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.593 [2024-06-11 15:04:00.206314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.593 [2024-06-11 15:04:00.217165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.593 [2024-06-11 15:04:00.217187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.593 [2024-06-11 15:04:00.227656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.593 [2024-06-11 15:04:00.227678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.593 [2024-06-11 15:04:00.238631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.238653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.251286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.251309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.260782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.260805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.272127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.272151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.282886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.282908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.293913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.293936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.304511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.304534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.315435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.315457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.326299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.326321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.337005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.337044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.347690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.347712] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.364958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.364981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.374654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.374677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.385880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.385902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.396262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.396285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.407172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.407194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.424843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.594 [2024-06-11 15:04:00.424866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.594 [2024-06-11 15:04:00.435185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.852 [2024-06-11 15:04:00.435207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.852 [2024-06-11 15:04:00.445830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.852 [2024-06-11 15:04:00.445852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.852 [2024-06-11 15:04:00.456486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.852 [2024-06-11 15:04:00.456508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.852 [2024-06-11 15:04:00.467155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.852 [2024-06-11 15:04:00.467178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.852 [2024-06-11 15:04:00.489044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.852 [2024-06-11 15:04:00.489068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.852 [2024-06-11 15:04:00.499509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.852 [2024-06-11 15:04:00.499532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.852 [2024-06-11 15:04:00.510066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.852 [2024-06-11 15:04:00.510089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.852 [2024-06-11 15:04:00.520861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.852 [2024-06-11 15:04:00.520885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.531762] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.531786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.542745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.542769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.553467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.553489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.564384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.564407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.576953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.576976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.594590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.594613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.605201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.605224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.615915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.615938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.626960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.626983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.637893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.637915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.654758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.654781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.664489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.664513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.675896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.675920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.853 [2024-06-11 15:04:00.686970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.853 [2024-06-11 15:04:00.686993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.111 [2024-06-11 15:04:00.697587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.111 [2024-06-11 15:04:00.697610] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.111 [2024-06-11 15:04:00.708237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.111 [2024-06-11 15:04:00.708260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.111 [2024-06-11 15:04:00.719975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.111 [2024-06-11 15:04:00.719998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.111 [2024-06-11 15:04:00.728628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.111 [2024-06-11 15:04:00.728651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.111 00:18:42.111 Latency(us) 00:18:42.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.111 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:42.111 Nvme1n1 : 5.01 11717.50 91.54 0.00 0.00 10912.42 4081.11 27048.49 00:18:42.111 =================================================================================================================== 00:18:42.111 Total : 11717.50 91.54 0.00 0.00 10912.42 4081.11 27048.49 00:18:42.111 [2024-06-11 15:04:00.735479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.111 [2024-06-11 15:04:00.735497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.111 [2024-06-11 15:04:00.743495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.111 [2024-06-11 15:04:00.743514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.111 [2024-06-11 15:04:00.755531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.111 [2024-06-11 15:04:00.755545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.111 [2024-06-11 15:04:00.767577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.767600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.779601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.779617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.791638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.791654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.803667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.803693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.815706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.815723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.827737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.827753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.835755] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.835772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.843776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.843793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.851800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.851814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.863838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.863852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.875873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.875887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.883894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.883909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.891915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.891929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.899936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.899949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.911972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.911985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.919992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.920005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.928013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.928034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.936044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.936059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.112 [2024-06-11 15:04:00.944062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.112 [2024-06-11 15:04:00.944075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.370 [2024-06-11 15:04:00.956098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.370 [2024-06-11 15:04:00.956111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3277378) - No such process 00:18:42.370 15:04:00 -- target/zcopy.sh@49 -- # wait 3277378 
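The long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors above is the expected output of zcopy.sh repeatedly re-adding namespace 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is still attached; the RPC is meant to fail on every iteration. The block that follows swaps the namespace for a delay bdev and drives it with the abort example. A minimal sketch of the equivalent steps, assuming SPDK's scripts/rpc.py against the default RPC socket (the test itself goes through its rpc_cmd wrapper; the bdev, subsystem, and address values below are taken from this log, not prescribed):

  # Remove the existing namespace 1 from the subsystem
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev (latencies in microseconds)
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Re-attach the delayed bdev as namespace 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Exercise it with the abort example over TCP: 1 core, 5 seconds, queue depth 64, 50/50 randrw
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'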
00:18:42.370 15:04:00 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:42.370 15:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.370 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:18:42.370 15:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.370 15:04:00 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:42.370 15:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.370 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:18:42.370 delay0 00:18:42.370 15:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.370 15:04:00 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:42.370 15:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.370 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:18:42.370 15:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.370 15:04:00 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:42.370 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.370 [2024-06-11 15:04:01.139301] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:48.937 Initializing NVMe Controllers 00:18:48.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:48.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:48.937 Initialization complete. Launching workers. 
00:18:48.937 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 79 00:18:48.937 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 366, failed to submit 33 00:18:48.937 success 162, unsuccess 204, failed 0 00:18:48.937 15:04:07 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:48.937 15:04:07 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:48.937 15:04:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:48.937 15:04:07 -- nvmf/common.sh@116 -- # sync 00:18:48.937 15:04:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:48.937 15:04:07 -- nvmf/common.sh@119 -- # set +e 00:18:48.937 15:04:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:48.937 15:04:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:48.937 rmmod nvme_tcp 00:18:48.937 rmmod nvme_fabrics 00:18:48.937 rmmod nvme_keyring 00:18:48.937 15:04:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:48.937 15:04:07 -- nvmf/common.sh@123 -- # set -e 00:18:48.937 15:04:07 -- nvmf/common.sh@124 -- # return 0 00:18:48.937 15:04:07 -- nvmf/common.sh@477 -- # '[' -n 3275255 ']' 00:18:48.937 15:04:07 -- nvmf/common.sh@478 -- # killprocess 3275255 00:18:48.937 15:04:07 -- common/autotest_common.sh@926 -- # '[' -z 3275255 ']' 00:18:48.937 15:04:07 -- common/autotest_common.sh@930 -- # kill -0 3275255 00:18:48.937 15:04:07 -- common/autotest_common.sh@931 -- # uname 00:18:48.937 15:04:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:48.937 15:04:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3275255 00:18:48.937 15:04:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:48.937 15:04:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:48.937 15:04:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3275255' 00:18:48.937 killing process with pid 3275255 00:18:48.937 15:04:07 -- common/autotest_common.sh@945 -- # kill 3275255 00:18:48.937 15:04:07 -- common/autotest_common.sh@950 -- # wait 3275255 00:18:48.937 15:04:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:48.937 15:04:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:48.937 15:04:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:48.937 15:04:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.937 15:04:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:48.937 15:04:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.937 15:04:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.937 15:04:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.843 15:04:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:50.843 00:18:50.843 real 0m32.597s 00:18:50.843 user 0m43.779s 00:18:50.843 sys 0m10.721s 00:18:50.843 15:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.843 15:04:09 -- common/autotest_common.sh@10 -- # set +x 00:18:50.843 ************************************ 00:18:50.843 END TEST nvmf_zcopy 00:18:50.843 ************************************ 00:18:50.843 15:04:09 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:50.843 15:04:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:50.843 15:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:50.843 15:04:09 -- common/autotest_common.sh@10 -- # set +x 00:18:50.843 ************************************ 
00:18:50.843 START TEST nvmf_nmic 00:18:50.843 ************************************ 00:18:50.843 15:04:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:51.140 * Looking for test storage... 00:18:51.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.140 15:04:09 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.140 15:04:09 -- nvmf/common.sh@7 -- # uname -s 00:18:51.140 15:04:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.140 15:04:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.140 15:04:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.140 15:04:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.140 15:04:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.140 15:04:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.140 15:04:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.140 15:04:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.140 15:04:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.140 15:04:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.140 15:04:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:51.140 15:04:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:51.140 15:04:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.140 15:04:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.140 15:04:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.140 15:04:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.140 15:04:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.140 15:04:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.140 15:04:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.140 15:04:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.140 15:04:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.140 15:04:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.140 15:04:09 -- paths/export.sh@5 -- # export PATH 00:18:51.141 15:04:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.141 15:04:09 -- nvmf/common.sh@46 -- # : 0 00:18:51.141 15:04:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:51.141 15:04:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:51.141 15:04:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:51.141 15:04:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.141 15:04:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.141 15:04:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:51.141 15:04:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:51.141 15:04:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:51.141 15:04:09 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:51.141 15:04:09 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:51.141 15:04:09 -- target/nmic.sh@14 -- # nvmftestinit 00:18:51.141 15:04:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:51.141 15:04:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.141 15:04:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:51.141 15:04:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:51.141 15:04:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:51.141 15:04:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.141 15:04:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.141 15:04:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.141 15:04:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:51.141 15:04:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:51.141 15:04:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:51.141 15:04:09 -- common/autotest_common.sh@10 -- # set +x 00:18:57.775 15:04:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:57.775 15:04:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:57.775 15:04:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:57.775 15:04:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:57.775 15:04:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:57.775 15:04:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:57.775 15:04:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:57.775 15:04:15 -- nvmf/common.sh@294 -- # net_devs=() 00:18:57.775 15:04:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:57.775 15:04:15 -- nvmf/common.sh@295 -- # 
e810=() 00:18:57.775 15:04:15 -- nvmf/common.sh@295 -- # local -ga e810 00:18:57.775 15:04:15 -- nvmf/common.sh@296 -- # x722=() 00:18:57.775 15:04:15 -- nvmf/common.sh@296 -- # local -ga x722 00:18:57.775 15:04:15 -- nvmf/common.sh@297 -- # mlx=() 00:18:57.775 15:04:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:57.775 15:04:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.775 15:04:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:57.775 15:04:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:57.776 15:04:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:57.776 15:04:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:57.776 15:04:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:57.776 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:57.776 15:04:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:57.776 15:04:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:57.776 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:57.776 15:04:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:57.776 15:04:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:57.776 15:04:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.776 15:04:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:57.776 15:04:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.776 15:04:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:57.776 Found net 
devices under 0000:af:00.0: cvl_0_0 00:18:57.776 15:04:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.776 15:04:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:57.776 15:04:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.776 15:04:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:57.776 15:04:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.776 15:04:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:57.776 Found net devices under 0000:af:00.1: cvl_0_1 00:18:57.776 15:04:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.776 15:04:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:57.776 15:04:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:57.776 15:04:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:57.776 15:04:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.776 15:04:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.776 15:04:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.776 15:04:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:57.776 15:04:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.776 15:04:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.776 15:04:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:57.776 15:04:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.776 15:04:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.776 15:04:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:57.776 15:04:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:57.776 15:04:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.776 15:04:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.776 15:04:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.776 15:04:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.776 15:04:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:57.776 15:04:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.776 15:04:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.776 15:04:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.776 15:04:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:57.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:18:57.776 00:18:57.776 --- 10.0.0.2 ping statistics --- 00:18:57.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.776 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:18:57.776 15:04:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:18:57.776 00:18:57.776 --- 10.0.0.1 ping statistics --- 00:18:57.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.776 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:18:57.776 15:04:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.776 15:04:15 -- nvmf/common.sh@410 -- # return 0 00:18:57.776 15:04:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:57.776 15:04:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.776 15:04:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:57.776 15:04:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.776 15:04:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:57.776 15:04:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:57.776 15:04:15 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:57.776 15:04:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:57.776 15:04:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:57.776 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:18:57.776 15:04:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:57.776 15:04:15 -- nvmf/common.sh@469 -- # nvmfpid=3283480 00:18:57.776 15:04:15 -- nvmf/common.sh@470 -- # waitforlisten 3283480 00:18:57.776 15:04:15 -- common/autotest_common.sh@819 -- # '[' -z 3283480 ']' 00:18:57.776 15:04:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.776 15:04:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:57.776 15:04:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.776 15:04:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:57.776 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:18:57.776 [2024-06-11 15:04:16.036556] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:57.776 [2024-06-11 15:04:16.036613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.776 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.776 [2024-06-11 15:04:16.130836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:57.776 [2024-06-11 15:04:16.219210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:57.776 [2024-06-11 15:04:16.219355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.776 [2024-06-11 15:04:16.219366] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.776 [2024-06-11 15:04:16.219375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
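For readers following the trace, the interface plumbing that nvmftestinit performed just above reduces to a short iproute2/iptables sequence. This is a minimal sketch reconstructed only from the commands already traced in this log; the interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addresses, and TCP port 4420 are taken from the output above, not from the test scripts themselves:

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic in, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The successful sub-millisecond pings reported above are the precondition the script checks before it starts the NVMe/TCP target.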
00:18:57.776 [2024-06-11 15:04:16.219475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.776 [2024-06-11 15:04:16.219577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.776 [2024-06-11 15:04:16.219695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.776 [2024-06-11 15:04:16.219695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.344 15:04:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:58.344 15:04:16 -- common/autotest_common.sh@852 -- # return 0 00:18:58.344 15:04:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:58.344 15:04:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:58.344 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.344 15:04:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.344 15:04:16 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:58.344 15:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.344 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.344 [2024-06-11 15:04:16.918382] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.344 15:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.344 15:04:16 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:58.344 15:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.344 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.344 Malloc0 00:18:58.344 15:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.344 15:04:16 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:58.344 15:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.344 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.344 15:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.344 15:04:16 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.344 15:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.344 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.344 15:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.344 15:04:16 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.344 15:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.344 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.344 [2024-06-11 15:04:16.973959] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.344 15:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.344 15:04:16 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:58.344 test case1: single bdev can't be used in multiple subsystems 00:18:58.344 15:04:16 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:58.344 15:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.344 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.344 15:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.344 15:04:16 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:58.344 15:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 
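The check announced above ("test case1: single bdev can't be used in multiple subsystems") is driven through rpc_cmd, which wraps scripts/rpc.py. Reconstructed loosely as a standalone rpc.py session, with every argument taken from the traced calls in this log; the expected -32602 failure on the second nvmf_subsystem_add_ns appears on the lines just below:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # a second subsystem pointing at the same malloc bdev: the add_ns is expected to fail,
    # because Malloc0 is already claimed (exclusive_write) by cnode1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        || echo 'Adding namespace failed - expected result.'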
00:18:58.344 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.344 15:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.344 15:04:16 -- target/nmic.sh@28 -- # nmic_status=0 00:18:58.344 15:04:16 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:58.344 15:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.344 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.344 [2024-06-11 15:04:17.001911] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:58.344 [2024-06-11 15:04:17.001934] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:58.344 [2024-06-11 15:04:17.001944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.344 request: 00:18:58.344 { 00:18:58.344 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:58.344 "namespace": { 00:18:58.344 "bdev_name": "Malloc0" 00:18:58.344 }, 00:18:58.344 "method": "nvmf_subsystem_add_ns", 00:18:58.344 "req_id": 1 00:18:58.344 } 00:18:58.344 Got JSON-RPC error response 00:18:58.344 response: 00:18:58.344 { 00:18:58.344 "code": -32602, 00:18:58.344 "message": "Invalid parameters" 00:18:58.344 } 00:18:58.344 15:04:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:18:58.344 15:04:17 -- target/nmic.sh@29 -- # nmic_status=1 00:18:58.344 15:04:17 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:58.345 15:04:17 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:58.345 Adding namespace failed - expected result. 00:18:58.345 15:04:17 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:58.345 test case2: host connect to nvmf target in multiple paths 00:18:58.345 15:04:17 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:58.345 15:04:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.345 15:04:17 -- common/autotest_common.sh@10 -- # set +x 00:18:58.345 [2024-06-11 15:04:17.014047] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:58.345 15:04:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.345 15:04:17 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:59.723 15:04:18 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:01.100 15:04:19 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:01.100 15:04:19 -- common/autotest_common.sh@1177 -- # local i=0 00:19:01.100 15:04:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.100 15:04:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:01.100 15:04:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:03.005 15:04:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:03.005 15:04:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:03.005 15:04:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:03.005 15:04:21 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:19:03.005 15:04:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.005 15:04:21 -- common/autotest_common.sh@1187 -- # return 0 00:19:03.005 15:04:21 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:03.005 [global] 00:19:03.005 thread=1 00:19:03.005 invalidate=1 00:19:03.005 rw=write 00:19:03.005 time_based=1 00:19:03.005 runtime=1 00:19:03.005 ioengine=libaio 00:19:03.005 direct=1 00:19:03.005 bs=4096 00:19:03.005 iodepth=1 00:19:03.005 norandommap=0 00:19:03.005 numjobs=1 00:19:03.005 00:19:03.005 verify_dump=1 00:19:03.005 verify_backlog=512 00:19:03.005 verify_state_save=0 00:19:03.005 do_verify=1 00:19:03.005 verify=crc32c-intel 00:19:03.005 [job0] 00:19:03.005 filename=/dev/nvme0n1 00:19:03.005 Could not set queue depth (nvme0n1) 00:19:03.263 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.263 fio-3.35 00:19:03.263 Starting 1 thread 00:19:04.641 00:19:04.641 job0: (groupid=0, jobs=1): err= 0: pid=3284777: Tue Jun 11 15:04:23 2024 00:19:04.641 read: IOPS=1061, BW=4248KiB/s (4350kB/s)(4252KiB/1001msec) 00:19:04.641 slat (nsec): min=6488, max=48725, avg=9859.02, stdev=5606.32 00:19:04.641 clat (usec): min=354, max=1669, avg=540.91, stdev=91.65 00:19:04.641 lat (usec): min=362, max=1691, avg=550.77, stdev=94.35 00:19:04.641 clat percentiles (usec): 00:19:04.641 | 1.00th=[ 420], 5.00th=[ 441], 10.00th=[ 449], 20.00th=[ 453], 00:19:04.641 | 30.00th=[ 465], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 562], 00:19:04.641 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 652], 95.00th=[ 685], 00:19:04.641 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 1663], 99.95th=[ 1663], 00:19:04.641 | 99.99th=[ 1663] 00:19:04.641 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:04.641 slat (nsec): min=9025, max=39492, avg=10488.97, stdev=1928.03 00:19:04.641 clat (usec): min=197, max=578, avg=255.04, stdev=31.17 00:19:04.641 lat (usec): min=207, max=617, avg=265.53, stdev=31.88 00:19:04.641 clat percentiles (usec): 00:19:04.641 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 225], 00:19:04.641 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 260], 60.00th=[ 273], 00:19:04.641 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:19:04.641 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 562], 99.95th=[ 578], 00:19:04.641 | 99.99th=[ 578] 00:19:04.641 bw ( KiB/s): min= 7488, max= 7488, per=100.00%, avg=7488.00, stdev= 0.00, samples=1 00:19:04.641 iops : min= 1872, max= 1872, avg=1872.00, stdev= 0.00, samples=1 00:19:04.641 lat (usec) : 250=28.13%, 500=45.67%, 750=26.05%, 1000=0.08% 00:19:04.641 lat (msec) : 2=0.08% 00:19:04.641 cpu : usr=1.10%, sys=3.60%, ctx=2599, majf=0, minf=2 00:19:04.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.641 issued rwts: total=1063,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.641 00:19:04.641 Run status group 0 (all jobs): 00:19:04.641 READ: bw=4248KiB/s (4350kB/s), 4248KiB/s-4248KiB/s (4350kB/s-4350kB/s), io=4252KiB (4354kB), run=1001-1001msec 00:19:04.641 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB 
(6291kB), run=1001-1001msec 00:19:04.641 00:19:04.641 Disk stats (read/write): 00:19:04.641 nvme0n1: ios=1074/1260, merge=0/0, ticks=589/313, in_queue=902, util=92.89% 00:19:04.641 15:04:23 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:04.641 15:04:23 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:04.641 15:04:23 -- common/autotest_common.sh@1198 -- # local i=0 00:19:04.641 15:04:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:04.641 15:04:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.641 15:04:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:04.641 15:04:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.641 15:04:23 -- common/autotest_common.sh@1210 -- # return 0 00:19:04.641 15:04:23 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:04.641 15:04:23 -- target/nmic.sh@53 -- # nvmftestfini 00:19:04.641 15:04:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:04.641 15:04:23 -- nvmf/common.sh@116 -- # sync 00:19:04.641 15:04:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:04.641 15:04:23 -- nvmf/common.sh@119 -- # set +e 00:19:04.641 15:04:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:04.641 15:04:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:04.641 rmmod nvme_tcp 00:19:04.641 rmmod nvme_fabrics 00:19:04.641 rmmod nvme_keyring 00:19:04.900 15:04:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:04.900 15:04:23 -- nvmf/common.sh@123 -- # set -e 00:19:04.900 15:04:23 -- nvmf/common.sh@124 -- # return 0 00:19:04.900 15:04:23 -- nvmf/common.sh@477 -- # '[' -n 3283480 ']' 00:19:04.900 15:04:23 -- nvmf/common.sh@478 -- # killprocess 3283480 00:19:04.900 15:04:23 -- common/autotest_common.sh@926 -- # '[' -z 3283480 ']' 00:19:04.900 15:04:23 -- common/autotest_common.sh@930 -- # kill -0 3283480 00:19:04.900 15:04:23 -- common/autotest_common.sh@931 -- # uname 00:19:04.900 15:04:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:04.900 15:04:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3283480 00:19:04.900 15:04:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:04.900 15:04:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:04.900 15:04:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3283480' 00:19:04.900 killing process with pid 3283480 00:19:04.900 15:04:23 -- common/autotest_common.sh@945 -- # kill 3283480 00:19:04.900 15:04:23 -- common/autotest_common.sh@950 -- # wait 3283480 00:19:05.159 15:04:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:05.159 15:04:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:05.159 15:04:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:05.159 15:04:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.159 15:04:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:05.159 15:04:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.159 15:04:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.159 15:04:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.064 15:04:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:07.064 00:19:07.065 real 0m16.187s 00:19:07.065 user 0m42.176s 00:19:07.065 sys 0m5.520s 00:19:07.065 15:04:25 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.065 15:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 ************************************ 00:19:07.065 END TEST nvmf_nmic 00:19:07.065 ************************************ 00:19:07.065 15:04:25 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:07.065 15:04:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:07.065 15:04:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:07.065 15:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:07.325 ************************************ 00:19:07.325 START TEST nvmf_fio_target 00:19:07.325 ************************************ 00:19:07.325 15:04:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:07.325 * Looking for test storage... 00:19:07.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.325 15:04:25 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.325 15:04:25 -- nvmf/common.sh@7 -- # uname -s 00:19:07.325 15:04:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.325 15:04:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.325 15:04:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.325 15:04:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.325 15:04:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.325 15:04:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.325 15:04:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.325 15:04:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.325 15:04:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.325 15:04:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.325 15:04:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:07.325 15:04:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:07.325 15:04:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.325 15:04:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.325 15:04:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.325 15:04:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.325 15:04:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.325 15:04:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.325 15:04:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.325 15:04:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.325 15:04:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.325 15:04:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.325 15:04:26 -- paths/export.sh@5 -- # export PATH 00:19:07.326 15:04:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.326 15:04:26 -- nvmf/common.sh@46 -- # : 0 00:19:07.326 15:04:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:07.326 15:04:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:07.326 15:04:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:07.326 15:04:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.326 15:04:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.326 15:04:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:07.326 15:04:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:07.326 15:04:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:07.326 15:04:26 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.326 15:04:26 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.326 15:04:26 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.326 15:04:26 -- target/fio.sh@16 -- # nvmftestinit 00:19:07.326 15:04:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:07.326 15:04:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.326 15:04:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:07.326 15:04:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:07.326 15:04:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:07.326 15:04:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.326 15:04:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.326 15:04:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.326 15:04:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:07.326 15:04:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:07.326 15:04:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:07.326 15:04:26 -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.897 15:04:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:13.897 15:04:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:13.897 15:04:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:13.897 15:04:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:13.897 15:04:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:13.897 15:04:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:13.897 15:04:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:13.897 15:04:32 -- nvmf/common.sh@294 -- # net_devs=() 00:19:13.897 15:04:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:13.897 15:04:32 -- nvmf/common.sh@295 -- # e810=() 00:19:13.897 15:04:32 -- nvmf/common.sh@295 -- # local -ga e810 00:19:13.897 15:04:32 -- nvmf/common.sh@296 -- # x722=() 00:19:13.897 15:04:32 -- nvmf/common.sh@296 -- # local -ga x722 00:19:13.897 15:04:32 -- nvmf/common.sh@297 -- # mlx=() 00:19:13.897 15:04:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:13.897 15:04:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.897 15:04:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:13.897 15:04:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:13.897 15:04:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:13.897 15:04:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.897 15:04:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:13.897 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:13.897 15:04:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.897 15:04:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:13.897 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:13.897 15:04:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:19:13.897 15:04:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:13.897 15:04:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.897 15:04:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.897 15:04:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.897 15:04:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.897 15:04:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:13.897 Found net devices under 0000:af:00.0: cvl_0_0 00:19:13.897 15:04:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.897 15:04:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.897 15:04:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.897 15:04:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.897 15:04:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.897 15:04:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:13.897 Found net devices under 0000:af:00.1: cvl_0_1 00:19:13.897 15:04:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.897 15:04:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:13.897 15:04:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:13.897 15:04:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:13.897 15:04:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.897 15:04:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.897 15:04:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:13.897 15:04:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:13.897 15:04:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:13.897 15:04:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:13.897 15:04:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:13.897 15:04:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:13.897 15:04:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.897 15:04:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:13.897 15:04:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:13.897 15:04:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:13.897 15:04:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:13.897 15:04:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:13.897 15:04:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:13.897 15:04:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:13.897 15:04:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:13.897 15:04:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:13.897 15:04:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:13.897 15:04:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:13.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:13.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:19:13.897 00:19:13.897 --- 10.0.0.2 ping statistics --- 00:19:13.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.897 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:19:13.897 15:04:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:13.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:13.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:19:13.897 00:19:13.897 --- 10.0.0.1 ping statistics --- 00:19:13.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.897 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:19:13.897 15:04:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.897 15:04:32 -- nvmf/common.sh@410 -- # return 0 00:19:13.897 15:04:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:13.897 15:04:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.897 15:04:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:13.897 15:04:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.897 15:04:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:13.897 15:04:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:13.897 15:04:32 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:13.897 15:04:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:13.898 15:04:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:13.898 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:19:13.898 15:04:32 -- nvmf/common.sh@469 -- # nvmfpid=3289058 00:19:13.898 15:04:32 -- nvmf/common.sh@470 -- # waitforlisten 3289058 00:19:13.898 15:04:32 -- common/autotest_common.sh@819 -- # '[' -z 3289058 ']' 00:19:13.898 15:04:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.898 15:04:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:13.898 15:04:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:13.898 15:04:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.898 15:04:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:13.898 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:19:13.898 [2024-06-11 15:04:32.537041] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:13.898 [2024-06-11 15:04:32.537097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.898 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.898 [2024-06-11 15:04:32.630513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:13.898 [2024-06-11 15:04:32.719227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:13.898 [2024-06-11 15:04:32.719371] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.898 [2024-06-11 15:04:32.719382] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
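The nvmfappstart step above launches the target inside the namespace and waits for its JSON-RPC socket before any rpc.py calls are issued. A rough approximation of that step, using only the commands and paths visible in this log (the polling loop is an illustrative stand-in; the real waitforlisten helper lives in autotest_common.sh and may poll differently):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # block until the target answers on its default RPC socket; rpc_get_methods is a
    # convenient liveness probe because it succeeds as soon as the RPC server is up
    until [ -S /var/tmp/spdk.sock ] && \
          $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt up with pid $nvmfpid"

Once this returns, the script proceeds to build the bdev and subsystem layout for the fio run, as traced on the following lines.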
00:19:13.898 [2024-06-11 15:04:32.719390] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.898 [2024-06-11 15:04:32.719430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.898 [2024-06-11 15:04:32.719541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.898 [2024-06-11 15:04:32.719646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.898 [2024-06-11 15:04:32.719646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.834 15:04:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:14.834 15:04:33 -- common/autotest_common.sh@852 -- # return 0 00:19:14.834 15:04:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:14.834 15:04:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:14.834 15:04:33 -- common/autotest_common.sh@10 -- # set +x 00:19:14.834 15:04:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.834 15:04:33 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:15.093 [2024-06-11 15:04:33.732564] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.093 15:04:33 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.352 15:04:34 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:15.352 15:04:34 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.611 15:04:34 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:15.611 15:04:34 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.871 15:04:34 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:15.871 15:04:34 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.130 15:04:34 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:16.130 15:04:34 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:16.389 15:04:35 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.648 15:04:35 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:16.648 15:04:35 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.907 15:04:35 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:16.907 15:04:35 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.166 15:04:35 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:17.166 15:04:35 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:17.424 15:04:36 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:17.683 15:04:36 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:17.683 15:04:36 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:17.943 15:04:36 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:17.943 15:04:36 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:17.943 15:04:36 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.201 [2024-06-11 15:04:36.940013] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.201 15:04:36 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:18.460 15:04:37 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:18.719 15:04:37 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:20.094 15:04:38 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:20.094 15:04:38 -- common/autotest_common.sh@1177 -- # local i=0 00:19:20.094 15:04:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:20.094 15:04:38 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:20.094 15:04:38 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:20.094 15:04:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:21.995 15:04:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:21.995 15:04:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:21.995 15:04:40 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:21.995 15:04:40 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:21.995 15:04:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.995 15:04:40 -- common/autotest_common.sh@1187 -- # return 0 00:19:21.995 15:04:40 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:21.995 [global] 00:19:21.995 thread=1 00:19:21.995 invalidate=1 00:19:21.995 rw=write 00:19:21.995 time_based=1 00:19:21.995 runtime=1 00:19:21.995 ioengine=libaio 00:19:21.995 direct=1 00:19:21.995 bs=4096 00:19:21.995 iodepth=1 00:19:21.995 norandommap=0 00:19:21.995 numjobs=1 00:19:21.995 00:19:21.995 verify_dump=1 00:19:21.995 verify_backlog=512 00:19:21.995 verify_state_save=0 00:19:21.995 do_verify=1 00:19:21.995 verify=crc32c-intel 00:19:21.995 [job0] 00:19:21.995 filename=/dev/nvme0n1 00:19:21.995 [job1] 00:19:21.995 filename=/dev/nvme0n2 00:19:21.995 [job2] 00:19:21.995 filename=/dev/nvme0n3 00:19:21.995 [job3] 00:19:21.995 filename=/dev/nvme0n4 00:19:22.275 Could not set queue depth (nvme0n1) 00:19:22.275 Could not set queue depth (nvme0n2) 00:19:22.275 Could not set queue depth (nvme0n3) 00:19:22.275 Could not set queue depth (nvme0n4) 00:19:22.532 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:22.532 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:22.532 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:19:22.532 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:22.532 fio-3.35 00:19:22.532 Starting 4 threads 00:19:23.910 00:19:23.911 job0: (groupid=0, jobs=1): err= 0: pid=3290712: Tue Jun 11 15:04:42 2024 00:19:23.911 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:23.911 slat (nsec): min=7504, max=42684, avg=8516.00, stdev=1437.91 00:19:23.911 clat (usec): min=461, max=694, avg=571.16, stdev=25.96 00:19:23.911 lat (usec): min=469, max=702, avg=579.67, stdev=26.10 00:19:23.911 clat percentiles (usec): 00:19:23.911 | 1.00th=[ 478], 5.00th=[ 529], 10.00th=[ 545], 20.00th=[ 553], 00:19:23.911 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 578], 00:19:23.911 | 70.00th=[ 586], 80.00th=[ 586], 90.00th=[ 594], 95.00th=[ 603], 00:19:23.911 | 99.00th=[ 627], 99.50th=[ 627], 99.90th=[ 660], 99.95th=[ 693], 00:19:23.911 | 99.99th=[ 693] 00:19:23.911 write: IOPS=1190, BW=4763KiB/s (4878kB/s)(4768KiB/1001msec); 0 zone resets 00:19:23.911 slat (usec): min=10, max=33158, avg=64.84, stdev=1281.57 00:19:23.911 clat (usec): min=216, max=714, avg=269.37, stdev=34.09 00:19:23.911 lat (usec): min=228, max=33537, avg=334.20, stdev=1285.33 00:19:23.911 clat percentiles (usec): 00:19:23.911 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 247], 00:19:23.911 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:19:23.911 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 318], 00:19:23.911 | 99.00th=[ 383], 99.50th=[ 478], 99.90th=[ 586], 99.95th=[ 717], 00:19:23.911 | 99.99th=[ 717] 00:19:23.911 bw ( KiB/s): min= 4087, max= 4087, per=27.48%, avg=4087.00, stdev= 0.00, samples=1 00:19:23.911 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:19:23.911 lat (usec) : 250=13.00%, 500=42.10%, 750=44.90% 00:19:23.911 cpu : usr=1.60%, sys=4.10%, ctx=2221, majf=0, minf=1 00:19:23.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.911 issued rwts: total=1024,1192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.911 job1: (groupid=0, jobs=1): err= 0: pid=3290723: Tue Jun 11 15:04:42 2024 00:19:23.911 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:19:23.911 slat (nsec): min=9788, max=22930, avg=21381.91, stdev=3705.71 00:19:23.911 clat (usec): min=532, max=41982, avg=39233.33, stdev=8649.76 00:19:23.911 lat (usec): min=554, max=42005, avg=39254.71, stdev=8649.49 00:19:23.911 clat percentiles (usec): 00:19:23.911 | 1.00th=[ 529], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:23.911 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:23.911 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:19:23.911 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:23.911 | 99.99th=[42206] 00:19:23.911 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:19:23.911 slat (nsec): min=8916, max=35829, avg=10229.48, stdev=2334.79 00:19:23.911 clat (usec): min=189, max=1547, avg=261.26, stdev=67.20 00:19:23.911 lat (usec): min=198, max=1558, avg=271.49, stdev=67.84 00:19:23.911 clat percentiles (usec): 00:19:23.911 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 223], 20.00th=[ 235], 00:19:23.911 | 30.00th=[ 243], 40.00th=[ 
249], 50.00th=[ 258], 60.00th=[ 269], 00:19:23.911 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:19:23.911 | 99.00th=[ 351], 99.50th=[ 510], 99.90th=[ 1549], 99.95th=[ 1549], 00:19:23.911 | 99.99th=[ 1549] 00:19:23.911 bw ( KiB/s): min= 4087, max= 4087, per=27.48%, avg=4087.00, stdev= 0.00, samples=1 00:19:23.911 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:19:23.911 lat (usec) : 250=38.58%, 500=56.74%, 750=0.56% 00:19:23.911 lat (msec) : 2=0.19%, 50=3.93% 00:19:23.911 cpu : usr=0.10%, sys=0.60%, ctx=534, majf=0, minf=1 00:19:23.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.911 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.911 job2: (groupid=0, jobs=1): err= 0: pid=3290739: Tue Jun 11 15:04:42 2024 00:19:23.911 read: IOPS=1227, BW=4911KiB/s (5029kB/s)(4916KiB/1001msec) 00:19:23.911 slat (nsec): min=6197, max=40064, avg=7233.39, stdev=2251.87 00:19:23.911 clat (usec): min=358, max=788, avg=453.98, stdev=33.24 00:19:23.911 lat (usec): min=365, max=795, avg=461.21, stdev=34.08 00:19:23.911 clat percentiles (usec): 00:19:23.911 | 1.00th=[ 388], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 437], 00:19:23.911 | 30.00th=[ 445], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 457], 00:19:23.911 | 70.00th=[ 461], 80.00th=[ 465], 90.00th=[ 474], 95.00th=[ 486], 00:19:23.911 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 717], 99.95th=[ 791], 00:19:23.911 | 99.99th=[ 791] 00:19:23.911 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:23.911 slat (nsec): min=8845, max=42205, avg=10098.98, stdev=1509.91 00:19:23.911 clat (usec): min=216, max=474, avg=268.27, stdev=27.95 00:19:23.911 lat (usec): min=226, max=516, avg=278.37, stdev=28.24 00:19:23.911 clat percentiles (usec): 00:19:23.911 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:19:23.911 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:19:23.911 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 318], 00:19:23.911 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 396], 99.95th=[ 474], 00:19:23.911 | 99.99th=[ 474] 00:19:23.911 bw ( KiB/s): min= 7121, max= 7121, per=47.88%, avg=7121.00, stdev= 0.00, samples=1 00:19:23.911 iops : min= 1780, max= 1780, avg=1780.00, stdev= 0.00, samples=1 00:19:23.911 lat (usec) : 250=17.72%, 500=80.61%, 750=1.63%, 1000=0.04% 00:19:23.911 cpu : usr=1.80%, sys=2.10%, ctx=2765, majf=0, minf=2 00:19:23.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.911 issued rwts: total=1229,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.911 job3: (groupid=0, jobs=1): err= 0: pid=3290745: Tue Jun 11 15:04:42 2024 00:19:23.911 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:19:23.911 slat (nsec): min=9421, max=24012, avg=22459.95, stdev=3022.55 00:19:23.911 clat (usec): min=40867, max=42123, avg=41173.34, stdev=417.46 00:19:23.911 lat (usec): min=40891, max=42146, avg=41195.80, stdev=416.90 00:19:23.911 clat percentiles (usec): 00:19:23.911 | 
1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:23.911 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:23.911 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:23.911 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:23.911 | 99.99th=[42206] 00:19:23.911 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:19:23.911 slat (nsec): min=9303, max=42573, avg=12487.36, stdev=4513.24 00:19:23.911 clat (usec): min=191, max=655, avg=264.88, stdev=46.39 00:19:23.911 lat (usec): min=204, max=667, avg=277.36, stdev=46.70 00:19:23.911 clat percentiles (usec): 00:19:23.911 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 229], 00:19:23.911 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 273], 00:19:23.911 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 343], 00:19:23.911 | 99.00th=[ 445], 99.50th=[ 482], 99.90th=[ 660], 99.95th=[ 660], 00:19:23.911 | 99.99th=[ 660] 00:19:23.911 bw ( KiB/s): min= 4087, max= 4087, per=27.48%, avg=4087.00, stdev= 0.00, samples=1 00:19:23.911 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:19:23.911 lat (usec) : 250=36.40%, 500=59.29%, 750=0.38% 00:19:23.911 lat (msec) : 50=3.94% 00:19:23.911 cpu : usr=0.20%, sys=0.69%, ctx=533, majf=0, minf=1 00:19:23.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.911 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.911 00:19:23.911 Run status group 0 (all jobs): 00:19:23.911 READ: bw=9102KiB/s (9321kB/s), 83.2KiB/s-4911KiB/s (85.2kB/s-5029kB/s), io=9184KiB (9404kB), run=1001-1009msec 00:19:23.911 WRITE: bw=14.5MiB/s (15.2MB/s), 2030KiB/s-6138KiB/s (2078kB/s-6285kB/s), io=14.7MiB (15.4MB), run=1001-1009msec 00:19:23.911 00:19:23.911 Disk stats (read/write): 00:19:23.911 nvme0n1: ios=893/1024, merge=0/0, ticks=1214/272, in_queue=1486, util=98.70% 00:19:23.911 nvme0n2: ios=53/512, merge=0/0, ticks=735/130, in_queue=865, util=88.09% 00:19:23.911 nvme0n3: ios=1024/1309, merge=0/0, ticks=467/346, in_queue=813, util=88.91% 00:19:23.911 nvme0n4: ios=38/512, merge=0/0, ticks=901/137, in_queue=1038, util=91.56% 00:19:23.911 15:04:42 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:23.911 [global] 00:19:23.911 thread=1 00:19:23.911 invalidate=1 00:19:23.911 rw=randwrite 00:19:23.911 time_based=1 00:19:23.911 runtime=1 00:19:23.911 ioengine=libaio 00:19:23.911 direct=1 00:19:23.911 bs=4096 00:19:23.911 iodepth=1 00:19:23.911 norandommap=0 00:19:23.911 numjobs=1 00:19:23.911 00:19:23.911 verify_dump=1 00:19:23.911 verify_backlog=512 00:19:23.911 verify_state_save=0 00:19:23.911 do_verify=1 00:19:23.911 verify=crc32c-intel 00:19:23.911 [job0] 00:19:23.911 filename=/dev/nvme0n1 00:19:23.911 [job1] 00:19:23.911 filename=/dev/nvme0n2 00:19:23.911 [job2] 00:19:23.911 filename=/dev/nvme0n3 00:19:23.911 [job3] 00:19:23.911 filename=/dev/nvme0n4 00:19:23.911 Could not set queue depth (nvme0n1) 00:19:23.911 Could not set queue depth (nvme0n2) 00:19:23.911 Could not set queue depth (nvme0n3) 00:19:23.911 Could not set queue depth (nvme0n4) 00:19:24.170 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.170 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.170 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.170 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.170 fio-3.35 00:19:24.170 Starting 4 threads 00:19:25.556 00:19:25.556 job0: (groupid=0, jobs=1): err= 0: pid=3291182: Tue Jun 11 15:04:44 2024 00:19:25.556 read: IOPS=511, BW=2045KiB/s (2094kB/s)(2084KiB/1019msec) 00:19:25.556 slat (nsec): min=7602, max=17733, avg=8805.71, stdev=1189.51 00:19:25.556 clat (usec): min=422, max=41985, avg=1403.69, stdev=5851.09 00:19:25.556 lat (usec): min=431, max=41995, avg=1412.50, stdev=5851.30 00:19:25.556 clat percentiles (usec): 00:19:25.556 | 1.00th=[ 429], 5.00th=[ 469], 10.00th=[ 510], 20.00th=[ 537], 00:19:25.556 | 30.00th=[ 545], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 553], 00:19:25.556 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 603], 00:19:25.556 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:25.556 | 99.99th=[42206] 00:19:25.556 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:19:25.556 slat (nsec): min=9048, max=38447, avg=11199.93, stdev=2090.05 00:19:25.556 clat (usec): min=168, max=471, avg=254.26, stdev=54.77 00:19:25.556 lat (usec): min=178, max=483, avg=265.46, stdev=55.77 00:19:25.556 clat percentiles (usec): 00:19:25.556 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:19:25.556 | 30.00th=[ 217], 40.00th=[ 229], 50.00th=[ 243], 60.00th=[ 255], 00:19:25.556 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 334], 95.00th=[ 375], 00:19:25.556 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 453], 99.95th=[ 474], 00:19:25.556 | 99.99th=[ 474] 00:19:25.556 bw ( KiB/s): min= 4096, max= 4096, per=25.48%, avg=4096.00, stdev= 0.00, samples=2 00:19:25.556 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:19:25.556 lat (usec) : 250=37.61%, 500=31.59%, 750=30.10% 00:19:25.556 lat (msec) : 50=0.71% 00:19:25.556 cpu : usr=1.08%, sys=1.38%, ctx=1547, majf=0, minf=1 00:19:25.556 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.556 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.556 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.556 job1: (groupid=0, jobs=1): err= 0: pid=3291194: Tue Jun 11 15:04:44 2024 00:19:25.556 read: IOPS=645, BW=2583KiB/s (2645kB/s)(2604KiB/1008msec) 00:19:25.556 slat (nsec): min=7355, max=29124, avg=8241.30, stdev=1201.04 00:19:25.556 clat (usec): min=475, max=41954, avg=993.48, stdev=3879.07 00:19:25.556 lat (usec): min=483, max=41965, avg=1001.72, stdev=3879.39 00:19:25.556 clat percentiles (usec): 00:19:25.556 | 1.00th=[ 498], 5.00th=[ 537], 10.00th=[ 545], 20.00th=[ 562], 00:19:25.556 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 619], 00:19:25.556 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 766], 00:19:25.556 | 99.00th=[ 1074], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:25.556 | 99.99th=[42206] 00:19:25.556 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:19:25.556 
slat (nsec): min=10378, max=49464, avg=12229.61, stdev=2253.99 00:19:25.557 clat (usec): min=225, max=513, avg=323.68, stdev=66.28 00:19:25.557 lat (usec): min=236, max=546, avg=335.91, stdev=66.42 00:19:25.557 clat percentiles (usec): 00:19:25.557 | 1.00th=[ 239], 5.00th=[ 258], 10.00th=[ 273], 20.00th=[ 281], 00:19:25.557 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:19:25.557 | 70.00th=[ 322], 80.00th=[ 388], 90.00th=[ 441], 95.00th=[ 474], 00:19:25.557 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 515], 99.95th=[ 515], 00:19:25.557 | 99.99th=[ 515] 00:19:25.557 bw ( KiB/s): min= 3912, max= 4280, per=25.48%, avg=4096.00, stdev=260.22, samples=2 00:19:25.557 iops : min= 978, max= 1070, avg=1024.00, stdev=65.05, samples=2 00:19:25.557 lat (usec) : 250=1.91%, 500=59.22%, 750=36.00%, 1000=2.33% 00:19:25.557 lat (msec) : 2=0.18%, 50=0.36% 00:19:25.557 cpu : usr=1.19%, sys=3.08%, ctx=1677, majf=0, minf=1 00:19:25.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.557 issued rwts: total=651,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.557 job2: (groupid=0, jobs=1): err= 0: pid=3291217: Tue Jun 11 15:04:44 2024 00:19:25.557 read: IOPS=1068, BW=4276KiB/s (4378kB/s)(4280KiB/1001msec) 00:19:25.557 slat (nsec): min=6546, max=38273, avg=9315.24, stdev=4809.36 00:19:25.557 clat (usec): min=305, max=1702, avg=491.94, stdev=80.46 00:19:25.557 lat (usec): min=312, max=1710, avg=501.25, stdev=81.03 00:19:25.557 clat percentiles (usec): 00:19:25.557 | 1.00th=[ 408], 5.00th=[ 424], 10.00th=[ 429], 20.00th=[ 437], 00:19:25.557 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 474], 60.00th=[ 490], 00:19:25.557 | 70.00th=[ 502], 80.00th=[ 519], 90.00th=[ 635], 95.00th=[ 660], 00:19:25.557 | 99.00th=[ 709], 99.50th=[ 750], 99.90th=[ 865], 99.95th=[ 1696], 00:19:25.557 | 99.99th=[ 1696] 00:19:25.557 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:25.557 slat (usec): min=8, max=124, avg=10.29, stdev= 4.51 00:19:25.557 clat (usec): min=203, max=570, avg=287.33, stdev=46.73 00:19:25.557 lat (usec): min=213, max=580, avg=297.61, stdev=47.40 00:19:25.557 clat percentiles (usec): 00:19:25.557 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 237], 20.00th=[ 251], 00:19:25.557 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:19:25.557 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 359], 95.00th=[ 392], 00:19:25.557 | 99.00th=[ 412], 99.50th=[ 445], 99.90th=[ 502], 99.95th=[ 570], 00:19:25.557 | 99.99th=[ 570] 00:19:25.557 bw ( KiB/s): min= 6232, max= 6232, per=38.76%, avg=6232.00, stdev= 0.00, samples=1 00:19:25.557 iops : min= 1558, max= 1558, avg=1558.00, stdev= 0.00, samples=1 00:19:25.557 lat (usec) : 250=10.94%, 500=76.09%, 750=12.74%, 1000=0.19% 00:19:25.557 lat (msec) : 2=0.04% 00:19:25.557 cpu : usr=1.00%, sys=3.10%, ctx=2607, majf=0, minf=1 00:19:25.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.557 issued rwts: total=1070,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.557 job3: (groupid=0, 
jobs=1): err= 0: pid=3291223: Tue Jun 11 15:04:44 2024 00:19:25.557 read: IOPS=20, BW=82.6KiB/s (84.6kB/s)(84.0KiB/1017msec) 00:19:25.557 slat (nsec): min=9979, max=26110, avg=15370.48, stdev=4656.54 00:19:25.557 clat (usec): min=600, max=41939, avg=39199.12, stdev=8849.36 00:19:25.557 lat (usec): min=623, max=41955, avg=39214.49, stdev=8847.58 00:19:25.557 clat percentiles (usec): 00:19:25.557 | 1.00th=[ 603], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:25.557 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:25.557 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:19:25.557 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:25.557 | 99.99th=[41681] 00:19:25.557 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:19:25.557 slat (nsec): min=8526, max=37157, avg=12700.94, stdev=2513.44 00:19:25.557 clat (usec): min=250, max=720, avg=347.45, stdev=89.75 00:19:25.557 lat (usec): min=262, max=732, avg=360.15, stdev=90.29 00:19:25.557 clat percentiles (usec): 00:19:25.557 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 281], 00:19:25.557 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 322], 00:19:25.557 | 70.00th=[ 379], 80.00th=[ 408], 90.00th=[ 482], 95.00th=[ 529], 00:19:25.557 | 99.00th=[ 644], 99.50th=[ 685], 99.90th=[ 717], 99.95th=[ 717], 00:19:25.557 | 99.99th=[ 717] 00:19:25.557 bw ( KiB/s): min= 4096, max= 4096, per=25.48%, avg=4096.00, stdev= 0.00, samples=1 00:19:25.557 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:25.557 lat (usec) : 500=88.37%, 750=7.88% 00:19:25.557 lat (msec) : 50=3.75% 00:19:25.557 cpu : usr=0.89%, sys=0.39%, ctx=534, majf=0, minf=2 00:19:25.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.557 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.557 00:19:25.557 Run status group 0 (all jobs): 00:19:25.557 READ: bw=8883KiB/s (9096kB/s), 82.6KiB/s-4276KiB/s (84.6kB/s-4378kB/s), io=9052KiB (9269kB), run=1001-1019msec 00:19:25.557 WRITE: bw=15.7MiB/s (16.5MB/s), 2014KiB/s-6138KiB/s (2062kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1019msec 00:19:25.557 00:19:25.557 Disk stats (read/write): 00:19:25.557 nvme0n1: ios=550/1024, merge=0/0, ticks=771/248, in_queue=1019, util=96.29% 00:19:25.557 nvme0n2: ios=668/1024, merge=0/0, ticks=922/319, in_queue=1241, util=96.94% 00:19:25.557 nvme0n3: ios=1024/1046, merge=0/0, ticks=496/297, in_queue=793, util=88.31% 00:19:25.557 nvme0n4: ios=54/512, merge=0/0, ticks=1717/174, in_queue=1891, util=96.34% 00:19:25.557 15:04:44 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:25.557 [global] 00:19:25.557 thread=1 00:19:25.557 invalidate=1 00:19:25.557 rw=write 00:19:25.557 time_based=1 00:19:25.557 runtime=1 00:19:25.557 ioengine=libaio 00:19:25.557 direct=1 00:19:25.557 bs=4096 00:19:25.557 iodepth=128 00:19:25.557 norandommap=0 00:19:25.557 numjobs=1 00:19:25.557 00:19:25.557 verify_dump=1 00:19:25.557 verify_backlog=512 00:19:25.557 verify_state_save=0 00:19:25.557 do_verify=1 00:19:25.557 verify=crc32c-intel 00:19:25.557 [job0] 00:19:25.557 filename=/dev/nvme0n1 00:19:25.557 [job1] 
00:19:25.557 filename=/dev/nvme0n2 00:19:25.557 [job2] 00:19:25.557 filename=/dev/nvme0n3 00:19:25.557 [job3] 00:19:25.557 filename=/dev/nvme0n4 00:19:25.557 Could not set queue depth (nvme0n1) 00:19:25.557 Could not set queue depth (nvme0n2) 00:19:25.557 Could not set queue depth (nvme0n3) 00:19:25.557 Could not set queue depth (nvme0n4) 00:19:25.816 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.816 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.816 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.816 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.816 fio-3.35 00:19:25.816 Starting 4 threads 00:19:27.192 00:19:27.192 job0: (groupid=0, jobs=1): err= 0: pid=3291648: Tue Jun 11 15:04:45 2024 00:19:27.192 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:19:27.192 slat (nsec): min=1574, max=31884k, avg=94370.09, stdev=831049.74 00:19:27.192 clat (usec): min=1843, max=54837, avg=12552.72, stdev=7351.35 00:19:27.192 lat (usec): min=1849, max=54840, avg=12647.09, stdev=7413.85 00:19:27.192 clat percentiles (usec): 00:19:27.192 | 1.00th=[ 3392], 5.00th=[ 5800], 10.00th=[ 7111], 20.00th=[ 7898], 00:19:27.192 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[10945], 60.00th=[11338], 00:19:27.192 | 70.00th=[11863], 80.00th=[14091], 90.00th=[18744], 95.00th=[27132], 00:19:27.192 | 99.00th=[39584], 99.50th=[47449], 99.90th=[54789], 99.95th=[54789], 00:19:27.192 | 99.99th=[54789] 00:19:27.192 write: IOPS=4588, BW=17.9MiB/s (18.8MB/s)(18.2MiB/1013msec); 0 zone resets 00:19:27.192 slat (usec): min=2, max=20898, avg=109.15, stdev=803.53 00:19:27.192 clat (usec): min=1150, max=54812, avg=15198.65, stdev=9954.40 00:19:27.192 lat (usec): min=1159, max=54820, avg=15307.80, stdev=10017.01 00:19:27.192 clat percentiles (usec): 00:19:27.192 | 1.00th=[ 3490], 5.00th=[ 5014], 10.00th=[ 6194], 20.00th=[ 7767], 00:19:27.192 | 30.00th=[ 9503], 40.00th=[11076], 50.00th=[12125], 60.00th=[13829], 00:19:27.192 | 70.00th=[16057], 80.00th=[20841], 90.00th=[27132], 95.00th=[37487], 00:19:27.192 | 99.00th=[51643], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:19:27.192 | 99.99th=[54789] 00:19:27.192 bw ( KiB/s): min=16896, max=19968, per=35.97%, avg=18432.00, stdev=2172.23, samples=2 00:19:27.192 iops : min= 4224, max= 4992, avg=4608.00, stdev=543.06, samples=2 00:19:27.192 lat (msec) : 2=0.12%, 4=1.42%, 10=32.15%, 20=51.36%, 50=13.98% 00:19:27.193 lat (msec) : 100=0.97% 00:19:27.193 cpu : usr=2.67%, sys=5.34%, ctx=388, majf=0, minf=1 00:19:27.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:27.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.193 issued rwts: total=4608,4648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.193 job1: (groupid=0, jobs=1): err= 0: pid=3291660: Tue Jun 11 15:04:45 2024 00:19:27.193 read: IOPS=3489, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1014msec) 00:19:27.193 slat (nsec): min=1533, max=28714k, avg=147041.05, stdev=1255514.39 00:19:27.193 clat (usec): min=1892, max=71704, avg=20057.68, stdev=10068.22 00:19:27.193 lat (usec): min=1896, max=73617, avg=20204.72, stdev=10157.76 00:19:27.193 clat percentiles 
(usec): 00:19:27.193 | 1.00th=[ 2311], 5.00th=[ 8979], 10.00th=[10683], 20.00th=[12518], 00:19:27.193 | 30.00th=[13829], 40.00th=[15401], 50.00th=[16909], 60.00th=[20317], 00:19:27.193 | 70.00th=[23200], 80.00th=[27919], 90.00th=[33817], 95.00th=[39584], 00:19:27.193 | 99.00th=[53740], 99.50th=[53740], 99.90th=[71828], 99.95th=[71828], 00:19:27.193 | 99.99th=[71828] 00:19:27.193 write: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec); 0 zone resets 00:19:27.193 slat (usec): min=2, max=19207, avg=124.66, stdev=1059.77 00:19:27.193 clat (usec): min=1340, max=40006, avg=15891.72, stdev=6941.70 00:19:27.193 lat (usec): min=1348, max=40095, avg=16016.38, stdev=6982.09 00:19:27.193 clat percentiles (usec): 00:19:27.193 | 1.00th=[ 4490], 5.00th=[ 7767], 10.00th=[ 9241], 20.00th=[10683], 00:19:27.193 | 30.00th=[11994], 40.00th=[13435], 50.00th=[13960], 60.00th=[15795], 00:19:27.193 | 70.00th=[17957], 80.00th=[19530], 90.00th=[23725], 95.00th=[33162], 00:19:27.193 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:19:27.193 | 99.99th=[40109] 00:19:27.193 bw ( KiB/s): min=12056, max=16616, per=27.98%, avg=14336.00, stdev=3224.41, samples=2 00:19:27.193 iops : min= 3014, max= 4154, avg=3584.00, stdev=806.10, samples=2 00:19:27.193 lat (msec) : 2=0.44%, 4=0.53%, 10=11.18%, 20=57.29%, 50=29.58% 00:19:27.193 lat (msec) : 100=0.98% 00:19:27.193 cpu : usr=2.17%, sys=4.15%, ctx=216, majf=0, minf=1 00:19:27.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:27.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.193 issued rwts: total=3538,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.193 job2: (groupid=0, jobs=1): err= 0: pid=3291685: Tue Jun 11 15:04:45 2024 00:19:27.193 read: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(10.0MiB/1021msec) 00:19:27.193 slat (nsec): min=1624, max=26367k, avg=172741.02, stdev=1415827.71 00:19:27.193 clat (usec): min=9872, max=51260, avg=23828.42, stdev=7738.49 00:19:27.193 lat (usec): min=9879, max=51287, avg=24001.16, stdev=7835.47 00:19:27.193 clat percentiles (usec): 00:19:27.193 | 1.00th=[10028], 5.00th=[13960], 10.00th=[14615], 20.00th=[16057], 00:19:27.193 | 30.00th=[18220], 40.00th=[21365], 50.00th=[23462], 60.00th=[24773], 00:19:27.193 | 70.00th=[27657], 80.00th=[30016], 90.00th=[34866], 95.00th=[36963], 00:19:27.193 | 99.00th=[44827], 99.50th=[45351], 99.90th=[49546], 99.95th=[50070], 00:19:27.193 | 99.99th=[51119] 00:19:27.193 write: IOPS=2740, BW=10.7MiB/s (11.2MB/s)(10.9MiB/1021msec); 0 zone resets 00:19:27.193 slat (usec): min=3, max=18675, avg=187.98, stdev=1321.61 00:19:27.193 clat (msec): min=7, max=139, avg=24.28, stdev=21.50 00:19:27.193 lat (msec): min=7, max=139, avg=24.46, stdev=21.61 00:19:27.193 clat percentiles (msec): 00:19:27.193 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:19:27.193 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 21], 00:19:27.193 | 70.00th=[ 24], 80.00th=[ 28], 90.00th=[ 34], 95.00th=[ 69], 00:19:27.193 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:19:27.193 | 99.99th=[ 140] 00:19:27.193 bw ( KiB/s): min= 7952, max=13408, per=20.84%, avg=10680.00, stdev=3857.97, samples=2 00:19:27.193 iops : min= 1988, max= 3352, avg=2670.00, stdev=964.49, samples=2 00:19:27.193 lat (msec) : 10=1.12%, 20=45.00%, 50=50.58%, 100=1.68%, 250=1.62% 00:19:27.193 cpu 
: usr=2.65%, sys=3.63%, ctx=176, majf=0, minf=1 00:19:27.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:27.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.193 issued rwts: total=2560,2798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.193 job3: (groupid=0, jobs=1): err= 0: pid=3291692: Tue Jun 11 15:04:45 2024 00:19:27.193 read: IOPS=1830, BW=7323KiB/s (7499kB/s)(7440KiB/1016msec) 00:19:27.193 slat (usec): min=2, max=50342, avg=288.17, stdev=2697.42 00:19:27.193 clat (msec): min=3, max=131, avg=37.51, stdev=26.41 00:19:27.193 lat (msec): min=8, max=131, avg=37.80, stdev=26.69 00:19:27.193 clat percentiles (msec): 00:19:27.193 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 15], 00:19:27.193 | 30.00th=[ 17], 40.00th=[ 17], 50.00th=[ 24], 60.00th=[ 39], 00:19:27.193 | 70.00th=[ 51], 80.00th=[ 68], 90.00th=[ 78], 95.00th=[ 82], 00:19:27.193 | 99.00th=[ 103], 99.50th=[ 103], 99.90th=[ 128], 99.95th=[ 132], 00:19:27.193 | 99.99th=[ 132] 00:19:27.193 write: IOPS=2015, BW=8063KiB/s (8257kB/s)(8192KiB/1016msec); 0 zone resets 00:19:27.193 slat (usec): min=3, max=39944, avg=216.63, stdev=1971.01 00:19:27.193 clat (usec): min=1138, max=83687, avg=28749.59, stdev=18069.33 00:19:27.193 lat (usec): min=1146, max=83752, avg=28966.22, stdev=18248.47 00:19:27.193 clat percentiles (usec): 00:19:27.193 | 1.00th=[ 3392], 5.00th=[ 6718], 10.00th=[10552], 20.00th=[12649], 00:19:27.193 | 30.00th=[17171], 40.00th=[17957], 50.00th=[21890], 60.00th=[27132], 00:19:27.193 | 70.00th=[37487], 80.00th=[48497], 90.00th=[53740], 95.00th=[65799], 00:19:27.193 | 99.00th=[74974], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:19:27.193 | 99.99th=[83362] 00:19:27.193 bw ( KiB/s): min= 4096, max=12288, per=15.99%, avg=8192.00, stdev=5792.62, samples=2 00:19:27.193 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:19:27.193 lat (msec) : 2=0.08%, 4=0.67%, 10=4.22%, 20=43.04%, 50=28.86% 00:19:27.193 lat (msec) : 100=21.44%, 250=1.69% 00:19:27.193 cpu : usr=1.38%, sys=2.27%, ctx=125, majf=0, minf=1 00:19:27.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:27.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.193 issued rwts: total=1860,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.193 00:19:27.193 Run status group 0 (all jobs): 00:19:27.193 READ: bw=48.1MiB/s (50.4MB/s), 7323KiB/s-17.8MiB/s (7499kB/s-18.6MB/s), io=49.1MiB (51.5MB), run=1013-1021msec 00:19:27.193 WRITE: bw=50.0MiB/s (52.5MB/s), 8063KiB/s-17.9MiB/s (8257kB/s-18.8MB/s), io=51.1MiB (53.6MB), run=1013-1021msec 00:19:27.193 00:19:27.193 Disk stats (read/write): 00:19:27.193 nvme0n1: ios=3810/4096, merge=0/0, ticks=33574/47536, in_queue=81110, util=90.48% 00:19:27.193 nvme0n2: ios=2858/3072, merge=0/0, ticks=45007/40561, in_queue=85568, util=97.14% 00:19:27.193 nvme0n3: ios=2350/2560, merge=0/0, ticks=47048/39417, in_queue=86465, util=100.00% 00:19:27.193 nvme0n4: ios=1557/1867, merge=0/0, ticks=35391/31598, in_queue=66989, util=95.46% 00:19:27.193 15:04:45 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite 
-r 1 -v 00:19:27.193 [global] 00:19:27.193 thread=1 00:19:27.193 invalidate=1 00:19:27.193 rw=randwrite 00:19:27.193 time_based=1 00:19:27.193 runtime=1 00:19:27.193 ioengine=libaio 00:19:27.193 direct=1 00:19:27.193 bs=4096 00:19:27.193 iodepth=128 00:19:27.193 norandommap=0 00:19:27.193 numjobs=1 00:19:27.193 00:19:27.193 verify_dump=1 00:19:27.193 verify_backlog=512 00:19:27.193 verify_state_save=0 00:19:27.193 do_verify=1 00:19:27.193 verify=crc32c-intel 00:19:27.193 [job0] 00:19:27.193 filename=/dev/nvme0n1 00:19:27.193 [job1] 00:19:27.193 filename=/dev/nvme0n2 00:19:27.193 [job2] 00:19:27.193 filename=/dev/nvme0n3 00:19:27.193 [job3] 00:19:27.193 filename=/dev/nvme0n4 00:19:27.193 Could not set queue depth (nvme0n1) 00:19:27.193 Could not set queue depth (nvme0n2) 00:19:27.193 Could not set queue depth (nvme0n3) 00:19:27.193 Could not set queue depth (nvme0n4) 00:19:27.452 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:27.452 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:27.452 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:27.452 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:27.452 fio-3.35 00:19:27.452 Starting 4 threads 00:19:28.889 00:19:28.889 job0: (groupid=0, jobs=1): err= 0: pid=3292107: Tue Jun 11 15:04:47 2024 00:19:28.889 read: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec) 00:19:28.889 slat (nsec): min=1996, max=31195k, avg=289150.53, stdev=1784904.10 00:19:28.889 clat (msec): min=12, max=154, avg=31.05, stdev=21.17 00:19:28.889 lat (msec): min=12, max=154, avg=31.34, stdev=21.38 00:19:28.889 clat percentiles (msec): 00:19:28.889 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 22], 00:19:28.889 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:19:28.889 | 70.00th=[ 30], 80.00th=[ 32], 90.00th=[ 37], 95.00th=[ 83], 00:19:28.889 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 155], 99.95th=[ 155], 00:19:28.889 | 99.99th=[ 155] 00:19:28.889 write: IOPS=1867, BW=7469KiB/s (7649kB/s)(7544KiB/1010msec); 0 zone resets 00:19:28.889 slat (usec): min=3, max=39731, avg=289.37, stdev=1754.25 00:19:28.889 clat (msec): min=3, max=154, avg=42.65, stdev=36.17 00:19:28.889 lat (msec): min=3, max=154, avg=42.93, stdev=36.40 00:19:28.889 clat percentiles (msec): 00:19:28.889 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 21], 00:19:28.889 | 30.00th=[ 22], 40.00th=[ 25], 50.00th=[ 28], 60.00th=[ 29], 00:19:28.889 | 70.00th=[ 32], 80.00th=[ 84], 90.00th=[ 106], 95.00th=[ 127], 00:19:28.889 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 155], 00:19:28.889 | 99.99th=[ 155] 00:19:28.889 bw ( KiB/s): min= 4992, max= 9072, per=11.70%, avg=7032.00, stdev=2885.00, samples=2 00:19:28.889 iops : min= 1248, max= 2268, avg=1758.00, stdev=721.25, samples=2 00:19:28.889 lat (msec) : 4=0.18%, 10=1.72%, 20=12.45%, 50=68.61%, 100=9.03% 00:19:28.889 lat (msec) : 250=8.01% 00:19:28.889 cpu : usr=1.98%, sys=2.58%, ctx=190, majf=0, minf=1 00:19:28.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:28.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.889 issued rwts: total=1536,1886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.889 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:19:28.889 job1: (groupid=0, jobs=1): err= 0: pid=3292119: Tue Jun 11 15:04:47 2024 00:19:28.889 read: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1019msec) 00:19:28.889 slat (usec): min=2, max=31191, avg=173.93, stdev=1399.74 00:19:28.889 clat (usec): min=9757, max=65673, avg=22168.89, stdev=9589.36 00:19:28.889 lat (usec): min=10481, max=65684, avg=22342.81, stdev=9693.20 00:19:28.889 clat percentiles (usec): 00:19:28.889 | 1.00th=[11469], 5.00th=[12649], 10.00th=[13566], 20.00th=[14877], 00:19:28.889 | 30.00th=[16188], 40.00th=[17695], 50.00th=[20055], 60.00th=[21103], 00:19:28.889 | 70.00th=[24511], 80.00th=[27395], 90.00th=[32375], 95.00th=[44303], 00:19:28.889 | 99.00th=[58459], 99.50th=[62129], 99.90th=[65799], 99.95th=[65799], 00:19:28.889 | 99.99th=[65799] 00:19:28.889 write: IOPS=3169, BW=12.4MiB/s (13.0MB/s)(12.6MiB/1019msec); 0 zone resets 00:19:28.889 slat (usec): min=4, max=21224, avg=137.75, stdev=1039.41 00:19:28.889 clat (usec): min=1657, max=65627, avg=18924.69, stdev=8409.71 00:19:28.889 lat (usec): min=1669, max=65633, avg=19062.44, stdev=8457.35 00:19:28.889 clat percentiles (usec): 00:19:28.889 | 1.00th=[ 5473], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[12125], 00:19:28.889 | 30.00th=[13304], 40.00th=[15795], 50.00th=[16450], 60.00th=[18744], 00:19:28.889 | 70.00th=[21103], 80.00th=[27132], 90.00th=[28967], 95.00th=[37487], 00:19:28.889 | 99.00th=[46400], 99.50th=[47973], 99.90th=[53216], 99.95th=[65799], 00:19:28.889 | 99.99th=[65799] 00:19:28.889 bw ( KiB/s): min=12280, max=12536, per=20.64%, avg=12408.00, stdev=181.02, samples=2 00:19:28.889 iops : min= 3070, max= 3134, avg=3102.00, stdev=45.25, samples=2 00:19:28.889 lat (msec) : 2=0.03%, 10=3.98%, 20=53.49%, 50=40.97%, 100=1.52% 00:19:28.889 cpu : usr=3.24%, sys=4.52%, ctx=198, majf=0, minf=1 00:19:28.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:28.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.889 issued rwts: total=3072,3230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.889 job2: (groupid=0, jobs=1): err= 0: pid=3292138: Tue Jun 11 15:04:47 2024 00:19:28.889 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:19:28.889 slat (usec): min=2, max=10360, avg=84.30, stdev=597.72 00:19:28.889 clat (usec): min=4087, max=22027, avg=11785.43, stdev=2687.41 00:19:28.889 lat (usec): min=7212, max=22034, avg=11869.73, stdev=2694.61 00:19:28.889 clat percentiles (usec): 00:19:28.889 | 1.00th=[ 7373], 5.00th=[ 7767], 10.00th=[ 8356], 20.00th=[ 9765], 00:19:28.889 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[12125], 00:19:28.889 | 70.00th=[12649], 80.00th=[13960], 90.00th=[15533], 95.00th=[16712], 00:19:28.889 | 99.00th=[19792], 99.50th=[20579], 99.90th=[21890], 99.95th=[21890], 00:19:28.889 | 99.99th=[22152] 00:19:28.889 write: IOPS=5967, BW=23.3MiB/s (24.4MB/s)(23.5MiB/1009msec); 0 zone resets 00:19:28.889 slat (usec): min=3, max=9248, avg=80.95, stdev=544.56 00:19:28.889 clat (usec): min=1701, max=22015, avg=10222.85, stdev=2690.99 00:19:28.889 lat (usec): min=1714, max=22021, avg=10303.80, stdev=2681.86 00:19:28.889 clat percentiles (usec): 00:19:28.889 | 1.00th=[ 4359], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7767], 00:19:28.889 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10683], 00:19:28.889 | 
70.00th=[11863], 80.00th=[12256], 90.00th=[14222], 95.00th=[14877], 00:19:28.889 | 99.00th=[16450], 99.50th=[16909], 99.90th=[19006], 99.95th=[19792], 00:19:28.889 | 99.99th=[21890] 00:19:28.889 bw ( KiB/s): min=22568, max=24576, per=39.21%, avg=23572.00, stdev=1419.87, samples=2 00:19:28.889 iops : min= 5642, max= 6144, avg=5893.00, stdev=354.97, samples=2 00:19:28.889 lat (msec) : 2=0.04%, 4=0.33%, 10=35.39%, 20=63.87%, 50=0.37% 00:19:28.889 cpu : usr=5.36%, sys=8.13%, ctx=401, majf=0, minf=1 00:19:28.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:28.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.889 issued rwts: total=5632,6021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.889 job3: (groupid=0, jobs=1): err= 0: pid=3292145: Tue Jun 11 15:04:47 2024 00:19:28.889 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:19:28.889 slat (usec): min=2, max=13648, avg=124.98, stdev=941.65 00:19:28.889 clat (usec): min=8433, max=29106, avg=16408.52, stdev=3845.12 00:19:28.889 lat (usec): min=8439, max=29109, avg=16533.50, stdev=3895.36 00:19:28.889 clat percentiles (usec): 00:19:28.889 | 1.00th=[11207], 5.00th=[11731], 10.00th=[12256], 20.00th=[13173], 00:19:28.889 | 30.00th=[13960], 40.00th=[14746], 50.00th=[15008], 60.00th=[15926], 00:19:28.889 | 70.00th=[18220], 80.00th=[19792], 90.00th=[22414], 95.00th=[23725], 00:19:28.890 | 99.00th=[26608], 99.50th=[27657], 99.90th=[29230], 99.95th=[29230], 00:19:28.890 | 99.99th=[29230] 00:19:28.890 write: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1003msec); 0 zone resets 00:19:28.890 slat (usec): min=3, max=13067, avg=109.67, stdev=780.12 00:19:28.890 clat (usec): min=1668, max=29101, avg=14358.20, stdev=3719.36 00:19:28.890 lat (usec): min=1681, max=29106, avg=14467.87, stdev=3709.63 00:19:28.890 clat percentiles (usec): 00:19:28.890 | 1.00th=[ 6652], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11338], 00:19:28.890 | 30.00th=[12125], 40.00th=[13173], 50.00th=[14222], 60.00th=[15139], 00:19:28.890 | 70.00th=[15795], 80.00th=[16450], 90.00th=[21365], 95.00th=[22414], 00:19:28.890 | 99.00th=[22938], 99.50th=[22938], 99.90th=[28705], 99.95th=[28705], 00:19:28.890 | 99.99th=[29230] 00:19:28.890 bw ( KiB/s): min=16392, max=16432, per=27.30%, avg=16412.00, stdev=28.28, samples=2 00:19:28.890 iops : min= 4098, max= 4108, avg=4103.00, stdev= 7.07, samples=2 00:19:28.890 lat (msec) : 2=0.02%, 4=0.01%, 10=4.34%, 20=81.30%, 50=14.32% 00:19:28.890 cpu : usr=3.89%, sys=5.99%, ctx=290, majf=0, minf=1 00:19:28.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:28.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.890 issued rwts: total=4096,4177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.890 00:19:28.890 Run status group 0 (all jobs): 00:19:28.890 READ: bw=55.0MiB/s (57.6MB/s), 6083KiB/s-21.8MiB/s (6229kB/s-22.9MB/s), io=56.0MiB (58.7MB), run=1003-1019msec 00:19:28.890 WRITE: bw=58.7MiB/s (61.6MB/s), 7469KiB/s-23.3MiB/s (7649kB/s-24.4MB/s), io=59.8MiB (62.7MB), run=1003-1019msec 00:19:28.890 00:19:28.890 Disk stats (read/write): 00:19:28.890 nvme0n1: ios=1047/1527, merge=0/0, ticks=25938/71911, in_queue=97849, 
util=89.58% 00:19:28.890 nvme0n2: ios=2611/2800, merge=0/0, ticks=56039/47371, in_queue=103410, util=90.76% 00:19:28.890 nvme0n3: ios=4642/5120, merge=0/0, ticks=53999/50245, in_queue=104244, util=93.31% 00:19:28.890 nvme0n4: ios=3384/3584, merge=0/0, ticks=55061/50519, in_queue=105580, util=95.99% 00:19:28.890 15:04:47 -- target/fio.sh@55 -- # sync 00:19:28.890 15:04:47 -- target/fio.sh@59 -- # fio_pid=3292248 00:19:28.890 15:04:47 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:28.890 15:04:47 -- target/fio.sh@61 -- # sleep 3 00:19:28.890 [global] 00:19:28.890 thread=1 00:19:28.890 invalidate=1 00:19:28.890 rw=read 00:19:28.890 time_based=1 00:19:28.890 runtime=10 00:19:28.890 ioengine=libaio 00:19:28.890 direct=1 00:19:28.890 bs=4096 00:19:28.890 iodepth=1 00:19:28.890 norandommap=1 00:19:28.890 numjobs=1 00:19:28.890 00:19:28.890 [job0] 00:19:28.890 filename=/dev/nvme0n1 00:19:28.890 [job1] 00:19:28.890 filename=/dev/nvme0n2 00:19:28.890 [job2] 00:19:28.890 filename=/dev/nvme0n3 00:19:28.890 [job3] 00:19:28.890 filename=/dev/nvme0n4 00:19:28.890 Could not set queue depth (nvme0n1) 00:19:28.890 Could not set queue depth (nvme0n2) 00:19:28.890 Could not set queue depth (nvme0n3) 00:19:28.890 Could not set queue depth (nvme0n4) 00:19:29.171 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.171 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.171 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.171 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.171 fio-3.35 00:19:29.171 Starting 4 threads 00:19:31.706 15:04:50 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:31.964 15:04:50 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:31.964 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=270336, buflen=4096 00:19:31.964 fio: pid=3292616, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:31.964 15:04:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.964 15:04:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:32.223 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=294912, buflen=4096 00:19:32.223 fio: pid=3292615, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:32.223 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=22876160, buflen=4096 00:19:32.223 fio: pid=3292593, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:32.223 15:04:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.223 15:04:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:32.482 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=23244800, buflen=4096 00:19:32.482 fio: pid=3292606, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:32.482 15:04:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:19:32.482 15:04:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:32.482 00:19:32.482 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3292593: Tue Jun 11 15:04:51 2024 00:19:32.482 read: IOPS=1763, BW=7054KiB/s (7223kB/s)(21.8MiB/3167msec) 00:19:32.482 slat (usec): min=6, max=13034, avg=15.95, stdev=309.74 00:19:32.482 clat (usec): min=336, max=41578, avg=545.37, stdev=1340.26 00:19:32.482 lat (usec): min=344, max=41586, avg=559.30, stdev=1367.98 00:19:32.482 clat percentiles (usec): 00:19:32.482 | 1.00th=[ 367], 5.00th=[ 408], 10.00th=[ 424], 20.00th=[ 441], 00:19:32.482 | 30.00th=[ 482], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 519], 00:19:32.482 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 562], 95.00th=[ 578], 00:19:32.482 | 99.00th=[ 668], 99.50th=[ 725], 99.90th=[41157], 99.95th=[41681], 00:19:32.482 | 99.99th=[41681] 00:19:32.482 bw ( KiB/s): min= 6336, max= 8124, per=52.07%, avg=7118.00, stdev=701.84, samples=6 00:19:32.482 iops : min= 1584, max= 2031, avg=1779.50, stdev=175.46, samples=6 00:19:32.482 lat (usec) : 500=41.96%, 750=57.63%, 1000=0.23% 00:19:32.482 lat (msec) : 2=0.04%, 4=0.02%, 50=0.11% 00:19:32.482 cpu : usr=0.88%, sys=2.37%, ctx=5591, majf=0, minf=1 00:19:32.482 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:32.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.482 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.482 issued rwts: total=5586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.482 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:32.483 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3292606: Tue Jun 11 15:04:51 2024 00:19:32.483 read: IOPS=1701, BW=6807KiB/s (6970kB/s)(22.2MiB/3335msec) 00:19:32.483 slat (usec): min=7, max=12445, avg=15.35, stdev=267.88 00:19:32.483 clat (usec): min=282, max=41534, avg=566.45, stdev=1809.35 00:19:32.483 lat (usec): min=291, max=41542, avg=581.80, stdev=1829.01 00:19:32.483 clat percentiles (usec): 00:19:32.483 | 1.00th=[ 322], 5.00th=[ 371], 10.00th=[ 424], 20.00th=[ 445], 00:19:32.483 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 482], 60.00th=[ 494], 00:19:32.483 | 70.00th=[ 506], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 570], 00:19:32.483 | 99.00th=[ 668], 99.50th=[ 725], 99.90th=[41157], 99.95th=[41157], 00:19:32.483 | 99.99th=[41681] 00:19:32.483 bw ( KiB/s): min= 4680, max= 8240, per=49.49%, avg=6765.00, stdev=1369.74, samples=6 00:19:32.483 iops : min= 1170, max= 2060, avg=1691.17, stdev=342.34, samples=6 00:19:32.483 lat (usec) : 500=64.31%, 750=35.24%, 1000=0.14% 00:19:32.483 lat (msec) : 2=0.02%, 4=0.05%, 20=0.02%, 50=0.21% 00:19:32.483 cpu : usr=1.08%, sys=2.73%, ctx=5683, majf=0, minf=1 00:19:32.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:32.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.483 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.483 issued rwts: total=5676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.483 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:32.483 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3292615: Tue Jun 11 15:04:51 2024 00:19:32.483 read: IOPS=24, BW=98.0KiB/s (100kB/s)(288KiB/2938msec) 
00:19:32.483 slat (nsec): min=18370, max=30611, avg=21978.23, stdev=1552.50 00:19:32.483 clat (usec): min=875, max=41979, avg=40489.87, stdev=4742.00 00:19:32.483 lat (usec): min=905, max=42001, avg=40511.84, stdev=4740.97 00:19:32.483 clat percentiles (usec): 00:19:32.483 | 1.00th=[ 873], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:32.483 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:32.483 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:19:32.483 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:32.483 | 99.99th=[42206] 00:19:32.483 bw ( KiB/s): min= 96, max= 104, per=0.71%, avg=97.60, stdev= 3.58, samples=5 00:19:32.483 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:19:32.483 lat (usec) : 1000=1.37% 00:19:32.483 lat (msec) : 50=97.26% 00:19:32.483 cpu : usr=0.07%, sys=0.00%, ctx=73, majf=0, minf=1 00:19:32.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:32.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.483 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.483 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.483 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:32.483 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3292616: Tue Jun 11 15:04:51 2024 00:19:32.483 read: IOPS=24, BW=97.6KiB/s (99.9kB/s)(264KiB/2705msec) 00:19:32.483 slat (nsec): min=8345, max=35528, avg=21918.15, stdev=3542.98 00:19:32.483 clat (usec): min=799, max=42952, avg=40491.62, stdev=4974.79 00:19:32.483 lat (usec): min=835, max=42974, avg=40513.53, stdev=4973.10 00:19:32.483 clat percentiles (usec): 00:19:32.483 | 1.00th=[ 799], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:32.483 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:32.483 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:19:32.483 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:32.483 | 99.99th=[42730] 00:19:32.483 bw ( KiB/s): min= 96, max= 104, per=0.71%, avg=97.60, stdev= 3.58, samples=5 00:19:32.483 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:19:32.483 lat (usec) : 1000=1.49% 00:19:32.483 lat (msec) : 50=97.01% 00:19:32.483 cpu : usr=0.07%, sys=0.00%, ctx=67, majf=0, minf=2 00:19:32.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:32.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.483 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.483 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.483 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:32.483 00:19:32.483 Run status group 0 (all jobs): 00:19:32.483 READ: bw=13.3MiB/s (14.0MB/s), 97.6KiB/s-7054KiB/s (99.9kB/s-7223kB/s), io=44.5MiB (46.7MB), run=2705-3335msec 00:19:32.483 00:19:32.483 Disk stats (read/write): 00:19:32.483 nvme0n1: ios=5495/0, merge=0/0, ticks=2946/0, in_queue=2946, util=94.30% 00:19:32.483 nvme0n2: ios=5710/0, merge=0/0, ticks=4099/0, in_queue=4099, util=98.73% 00:19:32.483 nvme0n3: ios=69/0, merge=0/0, ticks=2793/0, in_queue=2793, util=96.33% 00:19:32.483 nvme0n4: ios=63/0, merge=0/0, ticks=2551/0, in_queue=2551, util=96.41% 00:19:32.741 15:04:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:19:32.741 15:04:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:32.741 15:04:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.741 15:04:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:32.998 15:04:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.998 15:04:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:33.256 15:04:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.256 15:04:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:33.514 15:04:52 -- target/fio.sh@69 -- # fio_status=0 00:19:33.514 15:04:52 -- target/fio.sh@70 -- # wait 3292248 00:19:33.514 15:04:52 -- target/fio.sh@70 -- # fio_status=4 00:19:33.514 15:04:52 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:33.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.772 15:04:52 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:33.772 15:04:52 -- common/autotest_common.sh@1198 -- # local i=0 00:19:33.772 15:04:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:33.772 15:04:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:33.772 15:04:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:33.772 15:04:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:33.772 15:04:52 -- common/autotest_common.sh@1210 -- # return 0 00:19:33.772 15:04:52 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:33.772 15:04:52 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:33.772 nvmf hotplug test: fio failed as expected 00:19:33.772 15:04:52 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.031 15:04:52 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:34.031 15:04:52 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:34.031 15:04:52 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:34.031 15:04:52 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:34.031 15:04:52 -- target/fio.sh@91 -- # nvmftestfini 00:19:34.031 15:04:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:34.031 15:04:52 -- nvmf/common.sh@116 -- # sync 00:19:34.031 15:04:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:34.031 15:04:52 -- nvmf/common.sh@119 -- # set +e 00:19:34.031 15:04:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:34.031 15:04:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:34.031 rmmod nvme_tcp 00:19:34.031 rmmod nvme_fabrics 00:19:34.031 rmmod nvme_keyring 00:19:34.031 15:04:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:34.031 15:04:52 -- nvmf/common.sh@123 -- # set -e 00:19:34.031 15:04:52 -- nvmf/common.sh@124 -- # return 0 00:19:34.031 15:04:52 -- nvmf/common.sh@477 -- # '[' -n 3289058 ']' 00:19:34.031 15:04:52 -- nvmf/common.sh@478 -- # killprocess 3289058 00:19:34.031 15:04:52 -- common/autotest_common.sh@926 -- # '[' -z 3289058 ']' 00:19:34.031 15:04:52 -- common/autotest_common.sh@930 -- # kill -0 3289058 
00:19:34.031 15:04:52 -- common/autotest_common.sh@931 -- # uname 00:19:34.031 15:04:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:34.031 15:04:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3289058 00:19:34.031 15:04:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:34.031 15:04:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:34.031 15:04:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3289058' 00:19:34.031 killing process with pid 3289058 00:19:34.031 15:04:52 -- common/autotest_common.sh@945 -- # kill 3289058 00:19:34.031 15:04:52 -- common/autotest_common.sh@950 -- # wait 3289058 00:19:34.290 15:04:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:34.290 15:04:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:34.290 15:04:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:34.290 15:04:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.290 15:04:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:34.290 15:04:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.290 15:04:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.290 15:04:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.826 15:04:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:36.826 00:19:36.826 real 0m29.150s 00:19:36.826 user 2m22.883s 00:19:36.826 sys 0m8.722s 00:19:36.826 15:04:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.826 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:19:36.826 ************************************ 00:19:36.826 END TEST nvmf_fio_target 00:19:36.826 ************************************ 00:19:36.826 15:04:55 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:36.826 15:04:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:36.826 15:04:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:36.826 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:19:36.826 ************************************ 00:19:36.826 START TEST nvmf_bdevio 00:19:36.826 ************************************ 00:19:36.826 15:04:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:36.826 * Looking for test storage... 
00:19:36.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:36.826 15:04:55 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:36.826 15:04:55 -- nvmf/common.sh@7 -- # uname -s 00:19:36.826 15:04:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.826 15:04:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.826 15:04:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.826 15:04:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.826 15:04:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.826 15:04:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.826 15:04:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.826 15:04:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.826 15:04:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.826 15:04:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.826 15:04:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:36.826 15:04:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:36.826 15:04:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.826 15:04:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.826 15:04:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:36.826 15:04:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:36.826 15:04:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.826 15:04:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.826 15:04:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.826 15:04:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.826 15:04:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.826 15:04:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.826 15:04:55 -- paths/export.sh@5 -- # export PATH 00:19:36.826 15:04:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.826 15:04:55 -- nvmf/common.sh@46 -- # : 0 00:19:36.826 15:04:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:36.826 15:04:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:36.826 15:04:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:36.826 15:04:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.826 15:04:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.826 15:04:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:36.826 15:04:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:36.826 15:04:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:36.826 15:04:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:36.826 15:04:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:36.826 15:04:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:36.826 15:04:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:36.826 15:04:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.826 15:04:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:36.826 15:04:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:36.826 15:04:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:36.826 15:04:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.826 15:04:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.827 15:04:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.827 15:04:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:36.827 15:04:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:36.827 15:04:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:36.827 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:19:43.393 15:05:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:43.393 15:05:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:43.393 15:05:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:43.393 15:05:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:43.393 15:05:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:43.393 15:05:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:43.393 15:05:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:43.393 15:05:01 -- nvmf/common.sh@294 -- # net_devs=() 00:19:43.393 15:05:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:43.393 15:05:01 -- nvmf/common.sh@295 
-- # e810=() 00:19:43.393 15:05:01 -- nvmf/common.sh@295 -- # local -ga e810 00:19:43.393 15:05:01 -- nvmf/common.sh@296 -- # x722=() 00:19:43.393 15:05:01 -- nvmf/common.sh@296 -- # local -ga x722 00:19:43.393 15:05:01 -- nvmf/common.sh@297 -- # mlx=() 00:19:43.393 15:05:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:43.393 15:05:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.393 15:05:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:43.393 15:05:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:43.393 15:05:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:43.393 15:05:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.393 15:05:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:43.393 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:43.393 15:05:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.393 15:05:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:43.393 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:43.393 15:05:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:43.393 15:05:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.393 15:05:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.393 15:05:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.393 15:05:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.393 15:05:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:43.393 Found 
net devices under 0000:af:00.0: cvl_0_0 00:19:43.393 15:05:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.393 15:05:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.393 15:05:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.393 15:05:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.393 15:05:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.393 15:05:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:43.393 Found net devices under 0000:af:00.1: cvl_0_1 00:19:43.393 15:05:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.393 15:05:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:43.393 15:05:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:43.393 15:05:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:43.393 15:05:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.393 15:05:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.393 15:05:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.393 15:05:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:43.393 15:05:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.393 15:05:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.393 15:05:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:43.393 15:05:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.393 15:05:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.393 15:05:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:43.393 15:05:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:43.393 15:05:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.393 15:05:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.393 15:05:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.393 15:05:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.393 15:05:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:43.393 15:05:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.393 15:05:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.393 15:05:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.393 15:05:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:43.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:19:43.393 00:19:43.393 --- 10.0.0.2 ping statistics --- 00:19:43.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.393 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:19:43.393 15:05:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:19:43.393 00:19:43.393 --- 10.0.0.1 ping statistics --- 00:19:43.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.393 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:19:43.393 15:05:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.393 15:05:01 -- nvmf/common.sh@410 -- # return 0 00:19:43.393 15:05:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:43.393 15:05:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.393 15:05:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:43.393 15:05:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.393 15:05:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:43.393 15:05:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:43.393 15:05:01 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:43.393 15:05:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:43.393 15:05:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:43.393 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:19:43.393 15:05:01 -- nvmf/common.sh@469 -- # nvmfpid=3297471 00:19:43.393 15:05:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:43.393 15:05:01 -- nvmf/common.sh@470 -- # waitforlisten 3297471 00:19:43.393 15:05:01 -- common/autotest_common.sh@819 -- # '[' -z 3297471 ']' 00:19:43.393 15:05:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.393 15:05:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.393 15:05:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.393 15:05:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.393 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:19:43.393 [2024-06-11 15:05:01.501579] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:43.393 [2024-06-11 15:05:01.501618] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.393 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.393 [2024-06-11 15:05:01.580201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.393 [2024-06-11 15:05:01.668500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:43.393 [2024-06-11 15:05:01.668646] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.393 [2024-06-11 15:05:01.668658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.393 [2024-06-11 15:05:01.668667] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
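The nvmf_tgt instance starting above runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init created: the target-side port (cvl_0_0, 10.0.0.2) is moved into the namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace. A condensed recap of those steps, using the interface names, addresses, and flags seen in this trace:

  # condensed from the nvmf_tcp_init / nvmfappstart trace above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78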
00:19:43.393 [2024-06-11 15:05:01.668723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:43.393 [2024-06-11 15:05:01.668817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:43.393 [2024-06-11 15:05:01.668931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:43.393 [2024-06-11 15:05:01.668931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.652 15:05:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:43.652 15:05:02 -- common/autotest_common.sh@852 -- # return 0 00:19:43.652 15:05:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:43.652 15:05:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:43.652 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:43.652 15:05:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.652 15:05:02 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:43.652 15:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.652 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:43.652 [2024-06-11 15:05:02.428606] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.652 15:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.652 15:05:02 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:43.652 15:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.652 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:43.652 Malloc0 00:19:43.652 15:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.652 15:05:02 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:43.652 15:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.652 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:43.652 15:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.652 15:05:02 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.652 15:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.652 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:43.652 15:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.652 15:05:02 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.652 15:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.652 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:43.652 [2024-06-11 15:05:02.476058] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.652 15:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.652 15:05:02 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:43.652 15:05:02 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:43.652 15:05:02 -- nvmf/common.sh@520 -- # config=() 00:19:43.652 15:05:02 -- nvmf/common.sh@520 -- # local subsystem config 00:19:43.652 15:05:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:43.652 15:05:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:43.652 { 00:19:43.652 "params": { 00:19:43.652 "name": "Nvme$subsystem", 00:19:43.652 "trtype": "$TEST_TRANSPORT", 00:19:43.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.652 "adrfam": "ipv4", 00:19:43.652 "trsvcid": 
"$NVMF_PORT", 00:19:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.652 "hdgst": ${hdgst:-false}, 00:19:43.652 "ddgst": ${ddgst:-false} 00:19:43.652 }, 00:19:43.652 "method": "bdev_nvme_attach_controller" 00:19:43.652 } 00:19:43.652 EOF 00:19:43.652 )") 00:19:43.652 15:05:02 -- nvmf/common.sh@542 -- # cat 00:19:43.652 15:05:02 -- nvmf/common.sh@544 -- # jq . 00:19:43.652 15:05:02 -- nvmf/common.sh@545 -- # IFS=, 00:19:43.652 15:05:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:43.652 "params": { 00:19:43.652 "name": "Nvme1", 00:19:43.652 "trtype": "tcp", 00:19:43.652 "traddr": "10.0.0.2", 00:19:43.652 "adrfam": "ipv4", 00:19:43.652 "trsvcid": "4420", 00:19:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.652 "hdgst": false, 00:19:43.652 "ddgst": false 00:19:43.652 }, 00:19:43.652 "method": "bdev_nvme_attach_controller" 00:19:43.652 }' 00:19:43.910 [2024-06-11 15:05:02.521694] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:43.910 [2024-06-11 15:05:02.521747] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297666 ] 00:19:43.910 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.910 [2024-06-11 15:05:02.608899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:43.910 [2024-06-11 15:05:02.694430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.910 [2024-06-11 15:05:02.694532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.910 [2024-06-11 15:05:02.694532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.168 [2024-06-11 15:05:02.847794] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:44.168 [2024-06-11 15:05:02.847831] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:44.168 I/O targets: 00:19:44.168 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:44.168 00:19:44.168 00:19:44.168 CUnit - A unit testing framework for C - Version 2.1-3 00:19:44.168 http://cunit.sourceforge.net/ 00:19:44.168 00:19:44.168 00:19:44.168 Suite: bdevio tests on: Nvme1n1 00:19:44.168 Test: blockdev write read block ...passed 00:19:44.168 Test: blockdev write zeroes read block ...passed 00:19:44.168 Test: blockdev write zeroes read no split ...passed 00:19:44.168 Test: blockdev write zeroes read split ...passed 00:19:44.426 Test: blockdev write zeroes read split partial ...passed 00:19:44.426 Test: blockdev reset ...[2024-06-11 15:05:03.069053] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:44.426 [2024-06-11 15:05:03.069122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2194ad0 (9): Bad file descriptor 00:19:44.426 [2024-06-11 15:05:03.138337] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:44.426 passed 00:19:44.426 Test: blockdev write read 8 blocks ...passed 00:19:44.426 Test: blockdev write read size > 128k ...passed 00:19:44.426 Test: blockdev write read invalid size ...passed 00:19:44.426 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:44.426 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:44.426 Test: blockdev write read max offset ...passed 00:19:44.684 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:44.684 Test: blockdev writev readv 8 blocks ...passed 00:19:44.684 Test: blockdev writev readv 30 x 1block ...passed 00:19:44.684 Test: blockdev writev readv block ...passed 00:19:44.684 Test: blockdev writev readv size > 128k ...passed 00:19:44.684 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:44.685 Test: blockdev comparev and writev ...[2024-06-11 15:05:03.318316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.685 [2024-06-11 15:05:03.318343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.318355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.685 [2024-06-11 15:05:03.318361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.318763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.685 [2024-06-11 15:05:03.318773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.318783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.685 [2024-06-11 15:05:03.318789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.319186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.685 [2024-06-11 15:05:03.319196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.319205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.685 [2024-06-11 15:05:03.319212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.319604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.685 [2024-06-11 15:05:03.319613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.319623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.685 [2024-06-11 15:05:03.319630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:44.685 passed 00:19:44.685 Test: blockdev nvme passthru rw ...passed 00:19:44.685 Test: blockdev nvme passthru vendor specific ...[2024-06-11 15:05:03.403574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.685 [2024-06-11 15:05:03.403591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.403814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.685 [2024-06-11 15:05:03.403826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.404051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.685 [2024-06-11 15:05:03.404060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:44.685 [2024-06-11 15:05:03.404282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.685 [2024-06-11 15:05:03.404291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:44.685 passed 00:19:44.685 Test: blockdev nvme admin passthru ...passed 00:19:44.685 Test: blockdev copy ...passed 00:19:44.685 00:19:44.685 Run Summary: Type Total Ran Passed Failed Inactive 00:19:44.685 suites 1 1 n/a 0 0 00:19:44.685 tests 23 23 23 0 0 00:19:44.685 asserts 152 152 152 0 n/a 00:19:44.685 00:19:44.685 Elapsed time = 1.228 seconds 00:19:44.943 15:05:03 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.943 15:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.943 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:19:44.943 15:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.943 15:05:03 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:44.943 15:05:03 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:44.943 15:05:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:44.943 15:05:03 -- nvmf/common.sh@116 -- # sync 00:19:44.943 15:05:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:44.943 15:05:03 -- nvmf/common.sh@119 -- # set +e 00:19:44.943 15:05:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:44.943 15:05:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:44.943 rmmod nvme_tcp 00:19:44.943 rmmod nvme_fabrics 00:19:44.943 rmmod nvme_keyring 00:19:44.943 15:05:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:44.943 15:05:03 -- nvmf/common.sh@123 -- # set -e 00:19:44.943 15:05:03 -- nvmf/common.sh@124 -- # return 0 00:19:44.943 15:05:03 -- nvmf/common.sh@477 -- # '[' -n 3297471 ']' 00:19:44.943 15:05:03 -- nvmf/common.sh@478 -- # killprocess 3297471 00:19:44.943 15:05:03 -- common/autotest_common.sh@926 -- # '[' -z 3297471 ']' 00:19:44.943 15:05:03 -- common/autotest_common.sh@930 -- # kill -0 3297471 00:19:44.943 15:05:03 -- common/autotest_common.sh@931 -- # uname 00:19:44.943 15:05:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:44.943 15:05:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3297471 00:19:44.943 15:05:03 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:44.943 15:05:03 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:44.943 15:05:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3297471' 00:19:44.943 killing process with pid 3297471 00:19:44.943 15:05:03 -- common/autotest_common.sh@945 -- # kill 3297471 00:19:44.943 15:05:03 -- common/autotest_common.sh@950 -- # wait 3297471 00:19:45.201 15:05:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:45.201 15:05:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:45.201 15:05:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:45.201 15:05:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.201 15:05:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:45.201 15:05:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.201 15:05:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.201 15:05:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.736 15:05:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:47.736 00:19:47.736 real 0m10.978s 00:19:47.736 user 0m13.166s 00:19:47.736 sys 0m5.209s 00:19:47.736 15:05:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.736 15:05:06 -- common/autotest_common.sh@10 -- # set +x 00:19:47.736 ************************************ 00:19:47.736 END TEST nvmf_bdevio 00:19:47.736 ************************************ 00:19:47.736 15:05:06 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:47.736 15:05:06 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:47.736 15:05:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:47.736 15:05:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:47.736 15:05:06 -- common/autotest_common.sh@10 -- # set +x 00:19:47.736 ************************************ 00:19:47.736 START TEST nvmf_bdevio_no_huge 00:19:47.736 ************************************ 00:19:47.736 15:05:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:47.736 * Looking for test storage... 
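The nvmf_bdevio run that just ended (and the no-huge variant starting here) configures the target through the harness's rpc_cmd helper, which forwards its arguments to scripts/rpc.py. A sketch of the same setup issued directly, assuming the default /var/tmp/spdk.sock RPC socket used by the target in this run, with every value taken from the rpc_cmd calls traced above:

  # sketch of equivalent direct rpc.py calls (values from the trace above)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420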
00:19:47.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.736 15:05:06 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.736 15:05:06 -- nvmf/common.sh@7 -- # uname -s 00:19:47.736 15:05:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.736 15:05:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.736 15:05:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.736 15:05:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.736 15:05:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.736 15:05:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.736 15:05:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.736 15:05:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.736 15:05:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.736 15:05:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.736 15:05:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:47.736 15:05:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:47.736 15:05:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.736 15:05:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.736 15:05:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.736 15:05:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.736 15:05:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.736 15:05:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.736 15:05:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.736 15:05:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.736 15:05:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.736 15:05:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.736 15:05:06 -- paths/export.sh@5 -- # export PATH 00:19:47.736 15:05:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.736 15:05:06 -- nvmf/common.sh@46 -- # : 0 00:19:47.736 15:05:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.736 15:05:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.736 15:05:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.736 15:05:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.736 15:05:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.736 15:05:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.736 15:05:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.736 15:05:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.736 15:05:06 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:47.736 15:05:06 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:47.736 15:05:06 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:47.736 15:05:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.736 15:05:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.736 15:05:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.736 15:05:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.736 15:05:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.736 15:05:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.736 15:05:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.736 15:05:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.736 15:05:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:47.736 15:05:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:47.736 15:05:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:47.736 15:05:06 -- common/autotest_common.sh@10 -- # set +x 00:19:54.306 15:05:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:54.306 15:05:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:54.306 15:05:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:54.306 15:05:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:54.306 15:05:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:54.306 15:05:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:54.306 15:05:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:54.306 15:05:12 -- nvmf/common.sh@294 -- # net_devs=() 00:19:54.306 15:05:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:54.306 15:05:12 -- nvmf/common.sh@295 
-- # e810=() 00:19:54.306 15:05:12 -- nvmf/common.sh@295 -- # local -ga e810 00:19:54.306 15:05:12 -- nvmf/common.sh@296 -- # x722=() 00:19:54.306 15:05:12 -- nvmf/common.sh@296 -- # local -ga x722 00:19:54.306 15:05:12 -- nvmf/common.sh@297 -- # mlx=() 00:19:54.306 15:05:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:54.306 15:05:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.306 15:05:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:54.306 15:05:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:54.306 15:05:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:54.306 15:05:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:54.306 15:05:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:54.306 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:54.306 15:05:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:54.306 15:05:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:54.306 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:54.306 15:05:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:54.306 15:05:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:54.306 15:05:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.306 15:05:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:54.306 15:05:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.306 15:05:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:54.306 Found 
net devices under 0000:af:00.0: cvl_0_0 00:19:54.306 15:05:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.306 15:05:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:54.306 15:05:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.306 15:05:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:54.306 15:05:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.306 15:05:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:54.306 Found net devices under 0000:af:00.1: cvl_0_1 00:19:54.306 15:05:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.306 15:05:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:54.306 15:05:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:54.306 15:05:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:54.306 15:05:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:54.306 15:05:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.306 15:05:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.306 15:05:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.306 15:05:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:54.306 15:05:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.306 15:05:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.306 15:05:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:54.306 15:05:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.306 15:05:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.306 15:05:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:54.306 15:05:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:54.306 15:05:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.306 15:05:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.306 15:05:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.306 15:05:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.306 15:05:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:54.306 15:05:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.307 15:05:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.307 15:05:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.307 15:05:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:54.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:19:54.307 00:19:54.307 --- 10.0.0.2 ping statistics --- 00:19:54.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.307 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:19:54.307 15:05:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:54.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:19:54.307 00:19:54.307 --- 10.0.0.1 ping statistics --- 00:19:54.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.307 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:19:54.307 15:05:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.307 15:05:12 -- nvmf/common.sh@410 -- # return 0 00:19:54.307 15:05:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:54.307 15:05:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.307 15:05:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:54.307 15:05:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:54.307 15:05:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.307 15:05:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:54.307 15:05:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:54.307 15:05:12 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:54.307 15:05:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:54.307 15:05:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:54.307 15:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:54.307 15:05:12 -- nvmf/common.sh@469 -- # nvmfpid=3301907 00:19:54.307 15:05:12 -- nvmf/common.sh@470 -- # waitforlisten 3301907 00:19:54.307 15:05:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:54.307 15:05:12 -- common/autotest_common.sh@819 -- # '[' -z 3301907 ']' 00:19:54.307 15:05:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.307 15:05:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:54.307 15:05:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.307 15:05:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:54.307 15:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:54.307 [2024-06-11 15:05:12.847969] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:54.307 [2024-06-11 15:05:12.848034] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:54.307 [2024-06-11 15:05:12.951471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.307 [2024-06-11 15:05:13.064438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:54.307 [2024-06-11 15:05:13.064579] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.307 [2024-06-11 15:05:13.064590] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.307 [2024-06-11 15:05:13.064600] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
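Compared with the previous run, the no-huge variant differs mainly in how the SPDK applications are launched: both the target and the bdevio client run with DPDK's --no-huge option and a fixed 1024 MB memory size instead of hugepages. The two invocations used in this run (the bdevio one appears a little further down in the trace), condensed:

  # condensed from the nvmf_bdevio_no_huge trace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  ./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024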
00:19:54.307 [2024-06-11 15:05:13.064720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:54.307 [2024-06-11 15:05:13.064829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:54.307 [2024-06-11 15:05:13.064866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:54.307 [2024-06-11 15:05:13.064866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:54.874 15:05:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.874 15:05:13 -- common/autotest_common.sh@852 -- # return 0 00:19:54.874 15:05:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:54.874 15:05:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:54.874 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:19:55.137 15:05:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.137 15:05:13 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.137 15:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.138 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:19:55.138 [2024-06-11 15:05:13.740931] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.138 15:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.138 15:05:13 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:55.138 15:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.138 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:19:55.138 Malloc0 00:19:55.138 15:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.138 15:05:13 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.138 15:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.138 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:19:55.138 15:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.138 15:05:13 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.138 15:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.138 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:19:55.138 15:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.138 15:05:13 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.138 15:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.138 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:19:55.138 [2024-06-11 15:05:13.787121] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.138 15:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.138 15:05:13 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:55.138 15:05:13 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:55.138 15:05:13 -- nvmf/common.sh@520 -- # config=() 00:19:55.138 15:05:13 -- nvmf/common.sh@520 -- # local subsystem config 00:19:55.138 15:05:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:55.138 15:05:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:55.138 { 00:19:55.138 "params": { 00:19:55.138 "name": "Nvme$subsystem", 00:19:55.138 "trtype": "$TEST_TRANSPORT", 00:19:55.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.138 "adrfam": "ipv4", 00:19:55.138 
"trsvcid": "$NVMF_PORT", 00:19:55.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.138 "hdgst": ${hdgst:-false}, 00:19:55.138 "ddgst": ${ddgst:-false} 00:19:55.138 }, 00:19:55.138 "method": "bdev_nvme_attach_controller" 00:19:55.138 } 00:19:55.138 EOF 00:19:55.138 )") 00:19:55.138 15:05:13 -- nvmf/common.sh@542 -- # cat 00:19:55.138 15:05:13 -- nvmf/common.sh@544 -- # jq . 00:19:55.138 15:05:13 -- nvmf/common.sh@545 -- # IFS=, 00:19:55.138 15:05:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:55.138 "params": { 00:19:55.138 "name": "Nvme1", 00:19:55.138 "trtype": "tcp", 00:19:55.138 "traddr": "10.0.0.2", 00:19:55.138 "adrfam": "ipv4", 00:19:55.138 "trsvcid": "4420", 00:19:55.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.138 "hdgst": false, 00:19:55.138 "ddgst": false 00:19:55.138 }, 00:19:55.138 "method": "bdev_nvme_attach_controller" 00:19:55.138 }' 00:19:55.138 [2024-06-11 15:05:13.836362] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:55.138 [2024-06-11 15:05:13.836421] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3302085 ] 00:19:55.138 [2024-06-11 15:05:13.933808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:55.399 [2024-06-11 15:05:14.047944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.400 [2024-06-11 15:05:14.048050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.400 [2024-06-11 15:05:14.048054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.659 [2024-06-11 15:05:14.244912] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:55.659 [2024-06-11 15:05:14.244948] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:55.659 I/O targets: 00:19:55.659 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:55.659 00:19:55.659 00:19:55.659 CUnit - A unit testing framework for C - Version 2.1-3 00:19:55.659 http://cunit.sourceforge.net/ 00:19:55.659 00:19:55.659 00:19:55.659 Suite: bdevio tests on: Nvme1n1 00:19:55.659 Test: blockdev write read block ...passed 00:19:55.659 Test: blockdev write zeroes read block ...passed 00:19:55.659 Test: blockdev write zeroes read no split ...passed 00:19:55.659 Test: blockdev write zeroes read split ...passed 00:19:55.659 Test: blockdev write zeroes read split partial ...passed 00:19:55.659 Test: blockdev reset ...[2024-06-11 15:05:14.472519] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:55.659 [2024-06-11 15:05:14.472579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b9070 (9): Bad file descriptor 00:19:55.918 [2024-06-11 15:05:14.535151] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:55.918 passed 00:19:55.918 Test: blockdev write read 8 blocks ...passed 00:19:55.918 Test: blockdev write read size > 128k ...passed 00:19:55.918 Test: blockdev write read invalid size ...passed 00:19:55.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:55.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:55.918 Test: blockdev write read max offset ...passed 00:19:55.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:55.918 Test: blockdev writev readv 8 blocks ...passed 00:19:55.918 Test: blockdev writev readv 30 x 1block ...passed 00:19:55.918 Test: blockdev writev readv block ...passed 00:19:55.918 Test: blockdev writev readv size > 128k ...passed 00:19:55.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:55.918 Test: blockdev comparev and writev ...[2024-06-11 15:05:14.756100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.918 [2024-06-11 15:05:14.756127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.918 [2024-06-11 15:05:14.756140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.918 [2024-06-11 15:05:14.756147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.918 [2024-06-11 15:05:14.756561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.918 [2024-06-11 15:05:14.756571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:55.918 [2024-06-11 15:05:14.756581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.918 [2024-06-11 15:05:14.756588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:55.918 [2024-06-11 15:05:14.756969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.918 [2024-06-11 15:05:14.756979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:55.918 [2024-06-11 15:05:14.756989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.918 [2024-06-11 15:05:14.756996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:55.918 [2024-06-11 15:05:14.757406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.918 [2024-06-11 15:05:14.757416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:55.918 [2024-06-11 15:05:14.757426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:55.918 [2024-06-11 15:05:14.757433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:56.178 passed 00:19:56.178 Test: blockdev nvme passthru rw ...passed 00:19:56.178 Test: blockdev nvme passthru vendor specific ...[2024-06-11 15:05:14.840603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.178 [2024-06-11 15:05:14.840616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:56.178 [2024-06-11 15:05:14.840834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.178 [2024-06-11 15:05:14.840843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:56.178 [2024-06-11 15:05:14.841069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.178 [2024-06-11 15:05:14.841077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:56.178 [2024-06-11 15:05:14.841298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.178 [2024-06-11 15:05:14.841306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:56.178 passed 00:19:56.178 Test: blockdev nvme admin passthru ...passed 00:19:56.178 Test: blockdev copy ...passed 00:19:56.178 00:19:56.178 Run Summary: Type Total Ran Passed Failed Inactive 00:19:56.178 suites 1 1 n/a 0 0 00:19:56.178 tests 23 23 23 0 0 00:19:56.178 asserts 152 152 152 0 n/a 00:19:56.178 00:19:56.178 Elapsed time = 1.301 seconds 00:19:56.437 15:05:15 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.437 15:05:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.437 15:05:15 -- common/autotest_common.sh@10 -- # set +x 00:19:56.696 15:05:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.696 15:05:15 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:56.696 15:05:15 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:56.696 15:05:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:56.696 15:05:15 -- nvmf/common.sh@116 -- # sync 00:19:56.696 15:05:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:56.696 15:05:15 -- nvmf/common.sh@119 -- # set +e 00:19:56.696 15:05:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:56.696 15:05:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:56.696 rmmod nvme_tcp 00:19:56.696 rmmod nvme_fabrics 00:19:56.696 rmmod nvme_keyring 00:19:56.696 15:05:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:56.696 15:05:15 -- nvmf/common.sh@123 -- # set -e 00:19:56.696 15:05:15 -- nvmf/common.sh@124 -- # return 0 00:19:56.697 15:05:15 -- nvmf/common.sh@477 -- # '[' -n 3301907 ']' 00:19:56.697 15:05:15 -- nvmf/common.sh@478 -- # killprocess 3301907 00:19:56.697 15:05:15 -- common/autotest_common.sh@926 -- # '[' -z 3301907 ']' 00:19:56.697 15:05:15 -- common/autotest_common.sh@930 -- # kill -0 3301907 00:19:56.697 15:05:15 -- common/autotest_common.sh@931 -- # uname 00:19:56.697 15:05:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:56.697 15:05:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3301907 00:19:56.697 15:05:15 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:56.697 15:05:15 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:56.697 15:05:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3301907' 00:19:56.697 killing process with pid 3301907 00:19:56.697 15:05:15 -- common/autotest_common.sh@945 -- # kill 3301907 00:19:56.697 15:05:15 -- common/autotest_common.sh@950 -- # wait 3301907 00:19:57.264 15:05:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:57.264 15:05:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:57.265 15:05:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:57.265 15:05:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.265 15:05:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:57.265 15:05:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.265 15:05:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.265 15:05:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.170 15:05:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:59.170 00:19:59.170 real 0m11.765s 00:19:59.170 user 0m14.840s 00:19:59.170 sys 0m6.037s 00:19:59.170 15:05:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.170 15:05:17 -- common/autotest_common.sh@10 -- # set +x 00:19:59.170 ************************************ 00:19:59.170 END TEST nvmf_bdevio_no_huge 00:19:59.170 ************************************ 00:19:59.170 15:05:17 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:59.170 15:05:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:59.170 15:05:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:59.170 15:05:17 -- common/autotest_common.sh@10 -- # set +x 00:19:59.170 ************************************ 00:19:59.170 START TEST nvmf_tls 00:19:59.170 ************************************ 00:19:59.170 15:05:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:59.170 * Looking for test storage... 
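The END TEST / START TEST banners above come from the harness's run_test wrapper, which the trace shows checking its argument count, disabling xtrace, and then executing the suite script (here test/nvmf/target/tls.sh --transport=tcp). A minimal stand-in that reproduces that flow is sketched below; the name run_test_sketch is illustrative only and this is not SPDK's actual autotest_common.sh implementation:

  # illustrative stand-in for the run_test wrapper whose banners appear in this log
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      local rc=0
      "$@" || rc=$?   # e.g. run_test_sketch nvmf_tls ./test/nvmf/target/tls.sh --transport=tcp
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
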
00:19:59.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.429 15:05:18 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.429 15:05:18 -- nvmf/common.sh@7 -- # uname -s 00:19:59.429 15:05:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.429 15:05:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.429 15:05:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.430 15:05:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.430 15:05:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.430 15:05:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.430 15:05:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.430 15:05:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.430 15:05:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.430 15:05:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.430 15:05:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:59.430 15:05:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:59.430 15:05:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.430 15:05:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.430 15:05:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.430 15:05:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.430 15:05:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.430 15:05:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.430 15:05:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.430 15:05:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.430 15:05:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.430 15:05:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.430 15:05:18 -- paths/export.sh@5 -- # export PATH 00:19:59.430 15:05:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.430 15:05:18 -- nvmf/common.sh@46 -- # : 0 00:19:59.430 15:05:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:59.430 15:05:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:59.430 15:05:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:59.430 15:05:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.430 15:05:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.430 15:05:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:59.430 15:05:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:59.430 15:05:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:59.430 15:05:18 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:59.430 15:05:18 -- target/tls.sh@71 -- # nvmftestinit 00:19:59.430 15:05:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:59.430 15:05:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.430 15:05:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:59.430 15:05:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:59.430 15:05:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:59.430 15:05:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.430 15:05:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.430 15:05:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.430 15:05:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:59.430 15:05:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:59.430 15:05:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:59.430 15:05:18 -- common/autotest_common.sh@10 -- # set +x 00:20:05.999 15:05:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:05.999 15:05:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:05.999 15:05:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:05.999 15:05:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:05.999 15:05:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:05.999 15:05:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:05.999 15:05:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:05.999 15:05:24 -- nvmf/common.sh@294 -- # net_devs=() 00:20:05.999 15:05:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:05.999 15:05:24 -- nvmf/common.sh@295 -- # e810=() 00:20:05.999 
15:05:24 -- nvmf/common.sh@295 -- # local -ga e810 00:20:05.999 15:05:24 -- nvmf/common.sh@296 -- # x722=() 00:20:05.999 15:05:24 -- nvmf/common.sh@296 -- # local -ga x722 00:20:05.999 15:05:24 -- nvmf/common.sh@297 -- # mlx=() 00:20:05.999 15:05:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:05.999 15:05:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.999 15:05:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:05.999 15:05:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:05.999 15:05:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:05.999 15:05:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:05.999 15:05:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:05.999 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:05.999 15:05:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:05.999 15:05:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:05.999 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:05.999 15:05:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:05.999 15:05:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:05.999 15:05:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.999 15:05:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:05.999 15:05:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.999 15:05:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:05.999 Found net devices under 
0000:af:00.0: cvl_0_0 00:20:05.999 15:05:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.999 15:05:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:05.999 15:05:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.999 15:05:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:05.999 15:05:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.999 15:05:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:05.999 Found net devices under 0000:af:00.1: cvl_0_1 00:20:05.999 15:05:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.999 15:05:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:05.999 15:05:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:05.999 15:05:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:05.999 15:05:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.999 15:05:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.999 15:05:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.999 15:05:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:05.999 15:05:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.999 15:05:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.999 15:05:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:05.999 15:05:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.999 15:05:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.999 15:05:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:05.999 15:05:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:05.999 15:05:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.999 15:05:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.999 15:05:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.999 15:05:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.999 15:05:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:05.999 15:05:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.999 15:05:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.999 15:05:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.999 15:05:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:05.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:20:05.999 00:20:05.999 --- 10.0.0.2 ping statistics --- 00:20:05.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.999 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:20:05.999 15:05:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:05.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:20:05.999 00:20:05.999 --- 10.0.0.1 ping statistics --- 00:20:05.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.999 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:20:05.999 15:05:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.999 15:05:24 -- nvmf/common.sh@410 -- # return 0 00:20:05.999 15:05:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:05.999 15:05:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.999 15:05:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:05.999 15:05:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.999 15:05:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:05.999 15:05:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:05.999 15:05:24 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:05.999 15:05:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:05.999 15:05:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:05.999 15:05:24 -- common/autotest_common.sh@10 -- # set +x 00:20:05.999 15:05:24 -- nvmf/common.sh@469 -- # nvmfpid=3306443 00:20:06.000 15:05:24 -- nvmf/common.sh@470 -- # waitforlisten 3306443 00:20:06.000 15:05:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:06.000 15:05:24 -- common/autotest_common.sh@819 -- # '[' -z 3306443 ']' 00:20:06.000 15:05:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.000 15:05:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.000 15:05:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.000 15:05:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.000 15:05:24 -- common/autotest_common.sh@10 -- # set +x 00:20:06.000 [2024-06-11 15:05:24.735173] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:06.000 [2024-06-11 15:05:24.735228] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.000 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.000 [2024-06-11 15:05:24.824872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.259 [2024-06-11 15:05:24.911138] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:06.259 [2024-06-11 15:05:24.911287] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.259 [2024-06-11 15:05:24.911298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.259 [2024-06-11 15:05:24.911307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
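For reference, the nvmf_tcp_init sequence traced above (namespace creation, address assignment, firewall opening, and the two ping probes) condenses to the standalone sketch below. Interface names cvl_0_0/cvl_0_1 and the namespace cvl_0_0_ns_spdk are taken from the log; it assumes both E810 ports are already bound to the ice driver and must run as root:

  # condensed replay of nvmf_tcp_init as traced above (run as root)
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                       # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target reachability
  ip netns exec $NS ping -c 1 10.0.0.1                # target -> initiator reachability
  modprobe nvme-tcp
  # the target is then launched inside the namespace (workspace path shortened):
  # ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
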
00:20:06.259 [2024-06-11 15:05:24.911329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.827 15:05:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:06.827 15:05:25 -- common/autotest_common.sh@852 -- # return 0 00:20:06.827 15:05:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:06.827 15:05:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:06.827 15:05:25 -- common/autotest_common.sh@10 -- # set +x 00:20:06.827 15:05:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.827 15:05:25 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:20:06.827 15:05:25 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:07.085 true 00:20:07.085 15:05:25 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:07.085 15:05:25 -- target/tls.sh@82 -- # jq -r .tls_version 00:20:07.344 15:05:26 -- target/tls.sh@82 -- # version=0 00:20:07.344 15:05:26 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:20:07.344 15:05:26 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:07.602 15:05:26 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:07.602 15:05:26 -- target/tls.sh@90 -- # jq -r .tls_version 00:20:07.860 15:05:26 -- target/tls.sh@90 -- # version=13 00:20:07.860 15:05:26 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:20:07.860 15:05:26 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:08.118 15:05:26 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.118 15:05:26 -- target/tls.sh@98 -- # jq -r .tls_version 00:20:08.377 15:05:26 -- target/tls.sh@98 -- # version=7 00:20:08.377 15:05:26 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:20:08.377 15:05:26 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.377 15:05:26 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:08.377 15:05:27 -- target/tls.sh@105 -- # ktls=false 00:20:08.377 15:05:27 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:20:08.377 15:05:27 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:08.636 15:05:27 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.636 15:05:27 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:08.895 15:05:27 -- target/tls.sh@113 -- # ktls=true 00:20:08.895 15:05:27 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:20:08.895 15:05:27 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:09.154 15:05:27 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.154 15:05:27 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:20:09.413 15:05:28 -- target/tls.sh@121 -- # ktls=false 00:20:09.413 15:05:28 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:20:09.413 15:05:28 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:20:09.413 15:05:28 -- target/tls.sh@49 -- # local key hash crc 00:20:09.413 15:05:28 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:20:09.413 15:05:28 -- target/tls.sh@51 -- # hash=01 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # gzip -1 -c 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # tail -c8 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # head -c 4 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # crc='p$H�' 00:20:09.413 15:05:28 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:09.413 15:05:28 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:20:09.413 15:05:28 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:09.413 15:05:28 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:09.413 15:05:28 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:20:09.413 15:05:28 -- target/tls.sh@49 -- # local key hash crc 00:20:09.413 15:05:28 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:20:09.413 15:05:28 -- target/tls.sh@51 -- # hash=01 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # gzip -1 -c 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # tail -c8 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # head -c 4 00:20:09.413 15:05:28 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:20:09.413 15:05:28 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:09.413 15:05:28 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:20:09.413 15:05:28 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:09.413 15:05:28 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:09.413 15:05:28 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:09.413 15:05:28 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:09.413 15:05:28 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:09.413 15:05:28 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:09.413 15:05:28 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:09.413 15:05:28 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:09.413 15:05:28 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:09.671 15:05:28 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:09.929 15:05:28 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:09.929 15:05:28 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:09.929 15:05:28 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:10.203 [2024-06-11 15:05:28.981502] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
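The format_interchange_psk trace above shows how tls.sh turns a raw hex key into the NVMe TLS PSK interchange form: hash identifier 01, a CRC32 taken from a gzip trailer (the last 8 bytes of a gzip stream are CRC32 plus length, so tail -c8 | head -c4 yields the CRC), and base64 over key-plus-CRC. A self-contained sketch of that pipeline follows; the function name is illustrative, and it relies on the four CRC bytes surviving a shell variable assignment (no NUL, no trailing newline), which happens to hold for the two keys used here:

  # sketch of the PSK interchange encoding traced above
  format_interchange_psk_sketch() {
      local key=$1 hash=01
      local crc
      crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # CRC32 from the gzip trailer
      echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
  }
  format_interchange_psk_sketch 00112233445566778899aabbccddeeff
  # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
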
00:20:10.203 15:05:28 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:10.469 15:05:29 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:10.740 [2024-06-11 15:05:29.458781] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.740 [2024-06-11 15:05:29.458990] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.741 15:05:29 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.999 malloc0 00:20:10.999 15:05:29 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:11.257 15:05:29 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.516 15:05:30 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.516 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.492 Initializing NVMe Controllers 00:20:21.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:21.492 Initialization complete. Launching workers. 
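The throughput summary of this perf run follows below. For orientation, the setup_nvmf_tgt RPC sequence and the TLS-enabled spdk_nvme_perf invocation traced above condense to the sketch below; rpc stands in for scripts/rpc.py against the target started with --wait-for-rpc, and the long workspace paths are shortened:

  # condensed from the setup_nvmf_tgt / spdk_nvme_perf traces above (paths shortened)
  rpc=./scripts/rpc.py
  KEY=./test/nvmf/target/key1.txt                     # 0600-mode PSK file written earlier
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $KEY
  # TLS initiator: perf run from inside the target namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path $KEY
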
00:20:21.492 ======================================================== 00:20:21.492 Latency(us) 00:20:21.492 Device Information : IOPS MiB/s Average min max 00:20:21.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11237.62 43.90 5696.17 1199.71 8404.06 00:20:21.492 ======================================================== 00:20:21.492 Total : 11237.62 43.90 5696.17 1199.71 8404.06 00:20:21.492 00:20:21.492 15:05:40 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:21.492 15:05:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:21.492 15:05:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:21.492 15:05:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:21.492 15:05:40 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:21.492 15:05:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.492 15:05:40 -- target/tls.sh@28 -- # bdevperf_pid=3309373 00:20:21.492 15:05:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.492 15:05:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.492 15:05:40 -- target/tls.sh@31 -- # waitforlisten 3309373 /var/tmp/bdevperf.sock 00:20:21.492 15:05:40 -- common/autotest_common.sh@819 -- # '[' -z 3309373 ']' 00:20:21.492 15:05:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.492 15:05:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:21.492 15:05:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.492 15:05:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:21.492 15:05:40 -- common/autotest_common.sh@10 -- # set +x 00:20:21.751 [2024-06-11 15:05:40.363407] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:21.751 [2024-06-11 15:05:40.363466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3309373 ] 00:20:21.751 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.751 [2024-06-11 15:05:40.427001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.752 [2024-06-11 15:05:40.496680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.688 15:05:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:22.688 15:05:41 -- common/autotest_common.sh@852 -- # return 0 00:20:22.688 15:05:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:22.688 [2024-06-11 15:05:41.514223] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.947 TLSTESTn1 00:20:22.947 15:05:41 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:22.947 Running I/O for 10 seconds... 00:20:32.923 00:20:32.923 Latency(us) 00:20:32.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.923 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:32.923 Verification LBA range: start 0x0 length 0x2000 00:20:32.923 TLSTESTn1 : 10.02 2399.34 9.37 0.00 0.00 53294.09 8460.10 80549.70 00:20:32.923 =================================================================================================================== 00:20:32.923 Total : 2399.34 9.37 0.00 0.00 53294.09 8460.10 80549.70 00:20:33.182 0 00:20:33.182 15:05:51 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.182 15:05:51 -- target/tls.sh@45 -- # killprocess 3309373 00:20:33.182 15:05:51 -- common/autotest_common.sh@926 -- # '[' -z 3309373 ']' 00:20:33.182 15:05:51 -- common/autotest_common.sh@930 -- # kill -0 3309373 00:20:33.182 15:05:51 -- common/autotest_common.sh@931 -- # uname 00:20:33.182 15:05:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:33.182 15:05:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3309373 00:20:33.182 15:05:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:33.182 15:05:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:33.182 15:05:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3309373' 00:20:33.182 killing process with pid 3309373 00:20:33.182 15:05:51 -- common/autotest_common.sh@945 -- # kill 3309373 00:20:33.182 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.182 00:20:33.182 Latency(us) 00:20:33.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.182 =================================================================================================================== 00:20:33.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.182 15:05:51 -- common/autotest_common.sh@950 -- # wait 3309373 00:20:33.441 15:05:52 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:33.441 15:05:52 -- common/autotest_common.sh@640 -- # local es=0 00:20:33.441 15:05:52 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:33.441 15:05:52 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:33.442 15:05:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:33.442 15:05:52 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:33.442 15:05:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:33.442 15:05:52 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:33.442 15:05:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:33.442 15:05:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:33.442 15:05:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:33.442 15:05:52 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:33.442 15:05:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.442 15:05:52 -- target/tls.sh@28 -- # bdevperf_pid=3311498 00:20:33.442 15:05:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.442 15:05:52 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:33.442 15:05:52 -- target/tls.sh@31 -- # waitforlisten 3311498 /var/tmp/bdevperf.sock 00:20:33.442 15:05:52 -- common/autotest_common.sh@819 -- # '[' -z 3311498 ']' 00:20:33.442 15:05:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.442 15:05:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:33.442 15:05:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.442 15:05:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:33.442 15:05:52 -- common/autotest_common.sh@10 -- # set +x 00:20:33.442 [2024-06-11 15:05:52.080169] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:33.442 [2024-06-11 15:05:52.080226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311498 ] 00:20:33.442 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.442 [2024-06-11 15:05:52.142568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.442 [2024-06-11 15:05:52.207604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.379 15:05:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:34.379 15:05:53 -- common/autotest_common.sh@852 -- # return 0 00:20:34.379 15:05:53 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:34.639 [2024-06-11 15:05:53.228284] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.639 [2024-06-11 15:05:53.236443] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:34.639 [2024-06-11 15:05:53.236536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f0600 (107): Transport endpoint is not connected 00:20:34.639 [2024-06-11 15:05:53.237470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f0600 (9): Bad file descriptor 00:20:34.639 [2024-06-11 15:05:53.238471] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:34.639 [2024-06-11 15:05:53.238479] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:34.639 [2024-06-11 15:05:53.238487] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:34.639 request: 00:20:34.639 { 00:20:34.639 "name": "TLSTEST", 00:20:34.639 "trtype": "tcp", 00:20:34.639 "traddr": "10.0.0.2", 00:20:34.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.639 "adrfam": "ipv4", 00:20:34.639 "trsvcid": "4420", 00:20:34.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.639 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:34.639 "method": "bdev_nvme_attach_controller", 00:20:34.639 "req_id": 1 00:20:34.639 } 00:20:34.639 Got JSON-RPC error response 00:20:34.639 response: 00:20:34.639 { 00:20:34.639 "code": -32602, 00:20:34.639 "message": "Invalid parameters" 00:20:34.639 } 00:20:34.639 15:05:53 -- target/tls.sh@36 -- # killprocess 3311498 00:20:34.639 15:05:53 -- common/autotest_common.sh@926 -- # '[' -z 3311498 ']' 00:20:34.639 15:05:53 -- common/autotest_common.sh@930 -- # kill -0 3311498 00:20:34.639 15:05:53 -- common/autotest_common.sh@931 -- # uname 00:20:34.639 15:05:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.639 15:05:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3311498 00:20:34.639 15:05:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:34.639 15:05:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:34.639 15:05:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3311498' 00:20:34.639 killing process with pid 3311498 00:20:34.639 15:05:53 -- common/autotest_common.sh@945 -- # kill 3311498 00:20:34.639 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.639 00:20:34.639 Latency(us) 00:20:34.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.639 =================================================================================================================== 00:20:34.639 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.639 15:05:53 -- common/autotest_common.sh@950 -- # wait 3311498 00:20:34.899 15:05:53 -- target/tls.sh@37 -- # return 1 00:20:34.899 15:05:53 -- common/autotest_common.sh@643 -- # es=1 00:20:34.899 15:05:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:34.899 15:05:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:34.899 15:05:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:34.899 15:05:53 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:34.899 15:05:53 -- common/autotest_common.sh@640 -- # local es=0 00:20:34.899 15:05:53 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:34.899 15:05:53 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:34.899 15:05:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:34.899 15:05:53 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:34.899 15:05:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:34.899 15:05:53 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:34.899 15:05:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.899 15:05:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.899 15:05:53 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:34.899 15:05:53 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:34.899 15:05:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.899 15:05:53 -- target/tls.sh@28 -- # bdevperf_pid=3311771 00:20:34.899 15:05:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.899 15:05:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.899 15:05:53 -- target/tls.sh@31 -- # waitforlisten 3311771 /var/tmp/bdevperf.sock 00:20:34.899 15:05:53 -- common/autotest_common.sh@819 -- # '[' -z 3311771 ']' 00:20:34.899 15:05:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.899 15:05:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:34.899 15:05:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.899 15:05:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:34.899 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.899 [2024-06-11 15:05:53.542286] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:34.899 [2024-06-11 15:05:53.542344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311771 ] 00:20:34.899 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.899 [2024-06-11 15:05:53.605589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.899 [2024-06-11 15:05:53.675299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.833 15:05:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:35.833 15:05:54 -- common/autotest_common.sh@852 -- # return 0 00:20:35.833 15:05:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.092 [2024-06-11 15:05:54.693072] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.092 [2024-06-11 15:05:54.700114] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:36.092 [2024-06-11 15:05:54.700146] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:36.092 [2024-06-11 15:05:54.700177] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.092 [2024-06-11 15:05:54.701339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92600 (107): Transport endpoint is not connected 00:20:36.092 [2024-06-11 15:05:54.702332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1b92600 (9): Bad file descriptor 00:20:36.092 [2024-06-11 15:05:54.703334] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:36.092 [2024-06-11 15:05:54.703342] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:36.092 [2024-06-11 15:05:54.703350] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:36.092 request: 00:20:36.092 { 00:20:36.092 "name": "TLSTEST", 00:20:36.092 "trtype": "tcp", 00:20:36.092 "traddr": "10.0.0.2", 00:20:36.092 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.092 "adrfam": "ipv4", 00:20:36.092 "trsvcid": "4420", 00:20:36.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.092 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:36.092 "method": "bdev_nvme_attach_controller", 00:20:36.092 "req_id": 1 00:20:36.092 } 00:20:36.092 Got JSON-RPC error response 00:20:36.092 response: 00:20:36.092 { 00:20:36.092 "code": -32602, 00:20:36.092 "message": "Invalid parameters" 00:20:36.092 } 00:20:36.092 15:05:54 -- target/tls.sh@36 -- # killprocess 3311771 00:20:36.092 15:05:54 -- common/autotest_common.sh@926 -- # '[' -z 3311771 ']' 00:20:36.092 15:05:54 -- common/autotest_common.sh@930 -- # kill -0 3311771 00:20:36.092 15:05:54 -- common/autotest_common.sh@931 -- # uname 00:20:36.092 15:05:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:36.092 15:05:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3311771 00:20:36.092 15:05:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:36.092 15:05:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:36.092 15:05:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3311771' 00:20:36.092 killing process with pid 3311771 00:20:36.092 15:05:54 -- common/autotest_common.sh@945 -- # kill 3311771 00:20:36.092 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.092 00:20:36.092 Latency(us) 00:20:36.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.092 =================================================================================================================== 00:20:36.092 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.092 15:05:54 -- common/autotest_common.sh@950 -- # wait 3311771 00:20:36.352 15:05:54 -- target/tls.sh@37 -- # return 1 00:20:36.352 15:05:54 -- common/autotest_common.sh@643 -- # es=1 00:20:36.352 15:05:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:36.352 15:05:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:36.352 15:05:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:36.352 15:05:54 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.352 15:05:54 -- common/autotest_common.sh@640 -- # local es=0 00:20:36.352 15:05:54 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.352 15:05:54 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:36.352 15:05:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:36.352 15:05:54 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:36.352 15:05:54 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:36.352 15:05:54 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.352 15:05:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.352 15:05:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:36.352 15:05:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.352 15:05:54 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:36.352 15:05:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.352 15:05:54 -- target/tls.sh@28 -- # bdevperf_pid=3312047 00:20:36.352 15:05:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.352 15:05:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.352 15:05:54 -- target/tls.sh@31 -- # waitforlisten 3312047 /var/tmp/bdevperf.sock 00:20:36.352 15:05:54 -- common/autotest_common.sh@819 -- # '[' -z 3312047 ']' 00:20:36.352 15:05:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.352 15:05:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:36.352 15:05:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.352 15:05:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:36.352 15:05:54 -- common/autotest_common.sh@10 -- # set +x 00:20:36.352 [2024-06-11 15:05:55.014307] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:36.352 [2024-06-11 15:05:55.014368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312047 ] 00:20:36.352 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.352 [2024-06-11 15:05:55.078785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.352 [2024-06-11 15:05:55.139450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.287 15:05:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.287 15:05:55 -- common/autotest_common.sh@852 -- # return 0 00:20:37.287 15:05:55 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:37.546 [2024-06-11 15:05:56.176254] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.546 [2024-06-11 15:05:56.183918] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:37.546 [2024-06-11 15:05:56.183947] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:37.546 [2024-06-11 15:05:56.183979] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.546 [2024-06-11 15:05:56.184588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bc600 (107): Transport endpoint is not connected 00:20:37.546 [2024-06-11 15:05:56.185581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bc600 (9): Bad file descriptor 00:20:37.546 [2024-06-11 15:05:56.186583] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:37.546 [2024-06-11 15:05:56.186591] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.546 [2024-06-11 15:05:56.186599] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:37.546 request: 00:20:37.546 { 00:20:37.547 "name": "TLSTEST", 00:20:37.547 "trtype": "tcp", 00:20:37.547 "traddr": "10.0.0.2", 00:20:37.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.547 "adrfam": "ipv4", 00:20:37.547 "trsvcid": "4420", 00:20:37.547 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.547 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:37.547 "method": "bdev_nvme_attach_controller", 00:20:37.547 "req_id": 1 00:20:37.547 } 00:20:37.547 Got JSON-RPC error response 00:20:37.547 response: 00:20:37.547 { 00:20:37.547 "code": -32602, 00:20:37.547 "message": "Invalid parameters" 00:20:37.547 } 00:20:37.547 15:05:56 -- target/tls.sh@36 -- # killprocess 3312047 00:20:37.547 15:05:56 -- common/autotest_common.sh@926 -- # '[' -z 3312047 ']' 00:20:37.547 15:05:56 -- common/autotest_common.sh@930 -- # kill -0 3312047 00:20:37.547 15:05:56 -- common/autotest_common.sh@931 -- # uname 00:20:37.547 15:05:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.547 15:05:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3312047 00:20:37.547 15:05:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:37.547 15:05:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:37.547 15:05:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3312047' 00:20:37.547 killing process with pid 3312047 00:20:37.547 15:05:56 -- common/autotest_common.sh@945 -- # kill 3312047 00:20:37.547 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.547 00:20:37.547 Latency(us) 00:20:37.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.547 =================================================================================================================== 00:20:37.547 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.547 15:05:56 -- common/autotest_common.sh@950 -- # wait 3312047 00:20:37.806 15:05:56 -- target/tls.sh@37 -- # return 1 00:20:37.806 15:05:56 -- common/autotest_common.sh@643 -- # es=1 00:20:37.806 15:05:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:37.806 15:05:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:37.806 15:05:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:37.806 15:05:56 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.806 15:05:56 -- common/autotest_common.sh@640 -- # local es=0 00:20:37.806 15:05:56 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.806 15:05:56 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:37.806 15:05:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.806 15:05:56 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:37.806 15:05:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.806 15:05:56 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.806 15:05:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.806 15:05:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.806 15:05:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.806 15:05:56 -- target/tls.sh@23 -- # psk= 00:20:37.806 15:05:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.806 15:05:56 -- target/tls.sh@28 
-- # bdevperf_pid=3312321 00:20:37.806 15:05:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.806 15:05:56 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.806 15:05:56 -- target/tls.sh@31 -- # waitforlisten 3312321 /var/tmp/bdevperf.sock 00:20:37.806 15:05:56 -- common/autotest_common.sh@819 -- # '[' -z 3312321 ']' 00:20:37.806 15:05:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.806 15:05:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.806 15:05:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.806 15:05:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.806 15:05:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.806 [2024-06-11 15:05:56.499265] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:37.806 [2024-06-11 15:05:56.499329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312321 ] 00:20:37.806 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.806 [2024-06-11 15:05:56.563364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.806 [2024-06-11 15:05:56.625763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.741 15:05:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.741 15:05:57 -- common/autotest_common.sh@852 -- # return 0 00:20:38.741 15:05:57 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:39.000 [2024-06-11 15:05:57.667852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:39.000 [2024-06-11 15:05:57.669098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995c80 (9): Bad file descriptor 00:20:39.000 [2024-06-11 15:05:57.670096] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.000 [2024-06-11 15:05:57.670106] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:39.000 [2024-06-11 15:05:57.670114] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:39.000 request: 00:20:39.000 { 00:20:39.000 "name": "TLSTEST", 00:20:39.000 "trtype": "tcp", 00:20:39.000 "traddr": "10.0.0.2", 00:20:39.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.000 "adrfam": "ipv4", 00:20:39.000 "trsvcid": "4420", 00:20:39.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.000 "method": "bdev_nvme_attach_controller", 00:20:39.000 "req_id": 1 00:20:39.000 } 00:20:39.000 Got JSON-RPC error response 00:20:39.000 response: 00:20:39.000 { 00:20:39.000 "code": -32602, 00:20:39.000 "message": "Invalid parameters" 00:20:39.000 } 00:20:39.000 15:05:57 -- target/tls.sh@36 -- # killprocess 3312321 00:20:39.000 15:05:57 -- common/autotest_common.sh@926 -- # '[' -z 3312321 ']' 00:20:39.000 15:05:57 -- common/autotest_common.sh@930 -- # kill -0 3312321 00:20:39.000 15:05:57 -- common/autotest_common.sh@931 -- # uname 00:20:39.000 15:05:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:39.000 15:05:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3312321 00:20:39.000 15:05:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:39.000 15:05:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:39.000 15:05:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3312321' 00:20:39.000 killing process with pid 3312321 00:20:39.000 15:05:57 -- common/autotest_common.sh@945 -- # kill 3312321 00:20:39.000 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.000 00:20:39.000 Latency(us) 00:20:39.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.000 =================================================================================================================== 00:20:39.000 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:39.000 15:05:57 -- common/autotest_common.sh@950 -- # wait 3312321 00:20:39.258 15:05:57 -- target/tls.sh@37 -- # return 1 00:20:39.258 15:05:57 -- common/autotest_common.sh@643 -- # es=1 00:20:39.258 15:05:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:39.258 15:05:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:39.258 15:05:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:39.258 15:05:57 -- target/tls.sh@167 -- # killprocess 3306443 00:20:39.258 15:05:57 -- common/autotest_common.sh@926 -- # '[' -z 3306443 ']' 00:20:39.258 15:05:57 -- common/autotest_common.sh@930 -- # kill -0 3306443 00:20:39.258 15:05:57 -- common/autotest_common.sh@931 -- # uname 00:20:39.258 15:05:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:39.258 15:05:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3306443 00:20:39.258 15:05:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:39.258 15:05:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:39.258 15:05:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3306443' 00:20:39.258 killing process with pid 3306443 00:20:39.258 15:05:57 -- common/autotest_common.sh@945 -- # kill 3306443 00:20:39.258 15:05:57 -- common/autotest_common.sh@950 -- # wait 3306443 00:20:39.517 15:05:58 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:39.517 15:05:58 -- target/tls.sh@49 -- # local key hash crc 00:20:39.517 15:05:58 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:39.517 15:05:58 -- target/tls.sh@51 -- # hash=02 00:20:39.517 15:05:58 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:20:39.517 15:05:58 -- target/tls.sh@52 -- # head -c 4 00:20:39.517 15:05:58 -- target/tls.sh@52 -- # gzip -1 -c 00:20:39.517 15:05:58 -- target/tls.sh@52 -- # tail -c8 00:20:39.517 15:05:58 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:39.517 15:05:58 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:39.517 15:05:58 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:39.517 15:05:58 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:39.517 15:05:58 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:39.517 15:05:58 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.517 15:05:58 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:39.517 15:05:58 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.517 15:05:58 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:39.517 15:05:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:39.517 15:05:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:39.517 15:05:58 -- common/autotest_common.sh@10 -- # set +x 00:20:39.517 15:05:58 -- nvmf/common.sh@469 -- # nvmfpid=3312617 00:20:39.517 15:05:58 -- nvmf/common.sh@470 -- # waitforlisten 3312617 00:20:39.517 15:05:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:39.517 15:05:58 -- common/autotest_common.sh@819 -- # '[' -z 3312617 ']' 00:20:39.517 15:05:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.517 15:05:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:39.517 15:05:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.517 15:05:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:39.517 15:05:58 -- common/autotest_common.sh@10 -- # set +x 00:20:39.517 [2024-06-11 15:05:58.291088] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:39.517 [2024-06-11 15:05:58.291143] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.517 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.776 [2024-06-11 15:05:58.377692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.776 [2024-06-11 15:05:58.463517] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:39.776 [2024-06-11 15:05:58.463663] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.776 [2024-06-11 15:05:58.463674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.776 [2024-06-11 15:05:58.463684] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
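The key_long value above is produced by format_interchange_psk, whose individual steps are interleaved with other output in the trace. Pulled together, the derivation is: treat the configured key as ASCII text, append its CRC-32 (taken from the gzip -1 trailer), base64-encode the result, and wrap it in the NVMeTLSkey-1:<hash>: envelope. A condensed sketch of those same commands follows; the only assumption is that the CRC bytes contain no NUL byte, which holds for this key and is what lets a shell variable carry them, exactly as in the traced run.

    # Condensed from the format_interchange_psk trace above.
    key=00112233445566778899aabbccddeeff0011223344556677
    hash=02
    # gzip ends every stream with an 8-byte trailer: CRC-32 and input size, both
    # little-endian; "tail -c8 | head -c4" therefore extracts the raw CRC-32 of
    # the key bytes.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # Append the CRC to the key, base64-encode, and wrap in the interchange envelope.
    psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"    # matches the key_long value logged above

For this key the CRC bytes are c1 65 cd 27 (they show up as unprintable characters in the crc= assignment above), which is why the base64 payload ends in wWXNJw==.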
00:20:39.776 [2024-06-11 15:05:58.463704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.713 15:05:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:40.713 15:05:59 -- common/autotest_common.sh@852 -- # return 0 00:20:40.713 15:05:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:40.713 15:05:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:40.713 15:05:59 -- common/autotest_common.sh@10 -- # set +x 00:20:40.713 15:05:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.713 15:05:59 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.713 15:05:59 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.713 15:05:59 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.713 [2024-06-11 15:05:59.470046] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.713 15:05:59 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.972 15:05:59 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:41.230 [2024-06-11 15:05:59.939303] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.230 [2024-06-11 15:05:59.939506] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.231 15:05:59 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:41.490 malloc0 00:20:41.490 15:06:00 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.748 15:06:00 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:42.008 15:06:00 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:42.008 15:06:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:42.008 15:06:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:42.008 15:06:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:42.008 15:06:00 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:42.008 15:06:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.008 15:06:00 -- target/tls.sh@28 -- # bdevperf_pid=3313207 00:20:42.008 15:06:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.008 15:06:00 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.008 15:06:00 -- target/tls.sh@31 -- # waitforlisten 3313207 /var/tmp/bdevperf.sock 00:20:42.008 15:06:00 -- common/autotest_common.sh@819 -- # '[' -z 3313207 
']' 00:20:42.008 15:06:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.008 15:06:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:42.008 15:06:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.008 15:06:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:42.008 15:06:00 -- common/autotest_common.sh@10 -- # set +x 00:20:42.008 [2024-06-11 15:06:00.710351] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:42.008 [2024-06-11 15:06:00.710416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3313207 ] 00:20:42.008 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.008 [2024-06-11 15:06:00.775720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.008 [2024-06-11 15:06:00.843922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.944 15:06:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:42.944 15:06:01 -- common/autotest_common.sh@852 -- # return 0 00:20:42.945 15:06:01 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:43.203 [2024-06-11 15:06:01.857100] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.203 TLSTESTn1 00:20:43.203 15:06:01 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:43.467 Running I/O for 10 seconds... 
00:20:53.460 00:20:53.460 Latency(us) 00:20:53.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.460 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:53.460 Verification LBA range: start 0x0 length 0x2000 00:20:53.460 TLSTESTn1 : 10.02 2776.63 10.85 0.00 0.00 46054.42 7923.90 58863.24 00:20:53.460 =================================================================================================================== 00:20:53.460 Total : 2776.63 10.85 0.00 0.00 46054.42 7923.90 58863.24 00:20:53.460 0 00:20:53.460 15:06:12 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.460 15:06:12 -- target/tls.sh@45 -- # killprocess 3313207 00:20:53.460 15:06:12 -- common/autotest_common.sh@926 -- # '[' -z 3313207 ']' 00:20:53.460 15:06:12 -- common/autotest_common.sh@930 -- # kill -0 3313207 00:20:53.460 15:06:12 -- common/autotest_common.sh@931 -- # uname 00:20:53.460 15:06:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:53.460 15:06:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3313207 00:20:53.460 15:06:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:53.460 15:06:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:53.460 15:06:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3313207' 00:20:53.460 killing process with pid 3313207 00:20:53.460 15:06:12 -- common/autotest_common.sh@945 -- # kill 3313207 00:20:53.461 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.461 00:20:53.461 Latency(us) 00:20:53.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.461 =================================================================================================================== 00:20:53.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.461 15:06:12 -- common/autotest_common.sh@950 -- # wait 3313207 00:20:53.724 15:06:12 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:53.724 15:06:12 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:53.724 15:06:12 -- common/autotest_common.sh@640 -- # local es=0 00:20:53.724 15:06:12 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:53.724 15:06:12 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:53.724 15:06:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:53.724 15:06:12 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:53.724 15:06:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:53.724 15:06:12 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:53.724 15:06:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:53.724 15:06:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:53.724 15:06:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:53.724 15:06:12 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:53.724 15:06:12 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.724 15:06:12 -- target/tls.sh@28 -- # bdevperf_pid=3315780 00:20:53.724 15:06:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.724 15:06:12 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.724 15:06:12 -- target/tls.sh@31 -- # waitforlisten 3315780 /var/tmp/bdevperf.sock 00:20:53.724 15:06:12 -- common/autotest_common.sh@819 -- # '[' -z 3315780 ']' 00:20:53.724 15:06:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.724 15:06:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:53.724 15:06:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.724 15:06:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:53.724 15:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:53.724 [2024-06-11 15:06:12.438773] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:53.724 [2024-06-11 15:06:12.438839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315780 ] 00:20:53.724 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.724 [2024-06-11 15:06:12.503670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.724 [2024-06-11 15:06:12.566138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.660 15:06:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:54.660 15:06:13 -- common/autotest_common.sh@852 -- # return 0 00:20:54.660 15:06:13 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:54.920 [2024-06-11 15:06:13.594767] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.920 [2024-06-11 15:06:13.594800] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:54.920 request: 00:20:54.920 { 00:20:54.920 "name": "TLSTEST", 00:20:54.920 "trtype": "tcp", 00:20:54.920 "traddr": "10.0.0.2", 00:20:54.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.920 "adrfam": "ipv4", 00:20:54.920 "trsvcid": "4420", 00:20:54.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.920 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:54.920 "method": "bdev_nvme_attach_controller", 00:20:54.920 "req_id": 1 00:20:54.920 } 00:20:54.920 Got JSON-RPC error response 00:20:54.920 response: 00:20:54.920 { 00:20:54.920 "code": -22, 00:20:54.920 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:54.920 } 00:20:54.920 15:06:13 -- target/tls.sh@36 -- # killprocess 3315780 00:20:54.920 15:06:13 -- common/autotest_common.sh@926 -- # '[' -z 3315780 ']' 00:20:54.920 15:06:13 -- 
common/autotest_common.sh@930 -- # kill -0 3315780 00:20:54.920 15:06:13 -- common/autotest_common.sh@931 -- # uname 00:20:54.920 15:06:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:54.920 15:06:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3315780 00:20:54.920 15:06:13 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:54.920 15:06:13 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:54.920 15:06:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3315780' 00:20:54.920 killing process with pid 3315780 00:20:54.920 15:06:13 -- common/autotest_common.sh@945 -- # kill 3315780 00:20:54.920 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.920 00:20:54.920 Latency(us) 00:20:54.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.920 =================================================================================================================== 00:20:54.920 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:54.920 15:06:13 -- common/autotest_common.sh@950 -- # wait 3315780 00:20:55.180 15:06:13 -- target/tls.sh@37 -- # return 1 00:20:55.180 15:06:13 -- common/autotest_common.sh@643 -- # es=1 00:20:55.180 15:06:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:55.180 15:06:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:55.180 15:06:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:55.180 15:06:13 -- target/tls.sh@183 -- # killprocess 3312617 00:20:55.180 15:06:13 -- common/autotest_common.sh@926 -- # '[' -z 3312617 ']' 00:20:55.180 15:06:13 -- common/autotest_common.sh@930 -- # kill -0 3312617 00:20:55.180 15:06:13 -- common/autotest_common.sh@931 -- # uname 00:20:55.180 15:06:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:55.180 15:06:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3312617 00:20:55.180 15:06:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:55.180 15:06:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:55.180 15:06:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3312617' 00:20:55.180 killing process with pid 3312617 00:20:55.180 15:06:13 -- common/autotest_common.sh@945 -- # kill 3312617 00:20:55.180 15:06:13 -- common/autotest_common.sh@950 -- # wait 3312617 00:20:55.439 15:06:14 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:55.439 15:06:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:55.439 15:06:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:55.439 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:20:55.439 15:06:14 -- nvmf/common.sh@469 -- # nvmfpid=3316093 00:20:55.439 15:06:14 -- nvmf/common.sh@470 -- # waitforlisten 3316093 00:20:55.439 15:06:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:55.439 15:06:14 -- common/autotest_common.sh@819 -- # '[' -z 3316093 ']' 00:20:55.439 15:06:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.439 15:06:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:55.439 15:06:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
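The -22 failure just traced ("Could not retrieve PSK from file") is not about the key contents: tcp_load_psk rejects the file because it was chmod'ed to 0666 at target/tls.sh@179. The attach path checks the key file's permissions, and the same check makes the target-side nvmf_subsystem_add_host fail further below, so the key file has to stay owner-only as it was after the earlier chmod 0600. A minimal sketch using the same path as this run:

    KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
    chmod 0600 "$KEY"    # owner-only: accepted by bdev_nvme_attach_controller --psk
                         # and by nvmf_subsystem_add_host --psk
    chmod 0666 "$KEY"    # group/other access: tcp_load_psk logs "Incorrect
                         # permissions for PSK file" and the RPC fails, as above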
00:20:55.439 15:06:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:55.439 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:20:55.439 [2024-06-11 15:06:14.205311] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:55.439 [2024-06-11 15:06:14.205372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.439 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.698 [2024-06-11 15:06:14.292158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.698 [2024-06-11 15:06:14.377844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:55.698 [2024-06-11 15:06:14.377985] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.698 [2024-06-11 15:06:14.377997] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.698 [2024-06-11 15:06:14.378006] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.698 [2024-06-11 15:06:14.378040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.635 15:06:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:56.635 15:06:15 -- common/autotest_common.sh@852 -- # return 0 00:20:56.635 15:06:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:56.635 15:06:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:56.635 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:20:56.635 15:06:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.635 15:06:15 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:56.635 15:06:15 -- common/autotest_common.sh@640 -- # local es=0 00:20:56.635 15:06:15 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:56.635 15:06:15 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:56.635 15:06:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:56.635 15:06:15 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:56.635 15:06:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:56.635 15:06:15 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:56.635 15:06:15 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:56.635 15:06:15 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:56.635 [2024-06-11 15:06:15.391068] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.635 15:06:15 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:56.894 15:06:15 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:57.153 [2024-06-11 15:06:15.856310] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.153 [2024-06-11 15:06:15.856505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.153 15:06:15 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:57.411 malloc0 00:20:57.411 15:06:16 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.738 15:06:16 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:57.738 [2024-06-11 15:06:16.551355] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:57.738 [2024-06-11 15:06:16.551386] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:57.738 [2024-06-11 15:06:16.551409] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:57.738 request: 00:20:57.738 { 00:20:57.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.738 "host": "nqn.2016-06.io.spdk:host1", 00:20:57.738 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:57.738 "method": "nvmf_subsystem_add_host", 00:20:57.738 "req_id": 1 00:20:57.738 } 00:20:57.738 Got JSON-RPC error response 00:20:57.738 response: 00:20:57.738 { 00:20:57.738 "code": -32603, 00:20:57.738 "message": "Internal error" 00:20:57.738 } 00:20:57.738 15:06:16 -- common/autotest_common.sh@643 -- # es=1 00:20:57.738 15:06:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:57.738 15:06:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:57.738 15:06:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:57.738 15:06:16 -- target/tls.sh@189 -- # killprocess 3316093 00:20:57.738 15:06:16 -- common/autotest_common.sh@926 -- # '[' -z 3316093 ']' 00:20:57.738 15:06:16 -- common/autotest_common.sh@930 -- # kill -0 3316093 00:20:58.037 15:06:16 -- common/autotest_common.sh@931 -- # uname 00:20:58.037 15:06:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:58.037 15:06:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3316093 00:20:58.037 15:06:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:58.037 15:06:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:58.037 15:06:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3316093' 00:20:58.037 killing process with pid 3316093 00:20:58.037 15:06:16 -- common/autotest_common.sh@945 -- # kill 3316093 00:20:58.037 15:06:16 -- common/autotest_common.sh@950 -- # wait 3316093 00:20:58.037 15:06:16 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:58.037 15:06:16 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:58.037 15:06:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:58.037 15:06:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:58.038 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.038 15:06:16 -- nvmf/common.sh@469 -- # nvmfpid=3316643 00:20:58.038 15:06:16 -- nvmf/common.sh@470 -- # waitforlisten 3316643 00:20:58.038 15:06:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:58.038 15:06:16 -- common/autotest_common.sh@819 -- # '[' -z 3316643 ']' 00:20:58.038 15:06:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.038 15:06:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:58.038 15:06:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.038 15:06:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:58.038 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.297 [2024-06-11 15:06:16.911078] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:58.297 [2024-06-11 15:06:16.911122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.297 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.297 [2024-06-11 15:06:16.982165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.297 [2024-06-11 15:06:17.062863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:58.297 [2024-06-11 15:06:17.063008] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.297 [2024-06-11 15:06:17.063019] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.297 [2024-06-11 15:06:17.063039] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:58.297 [2024-06-11 15:06:17.063061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.232 15:06:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:59.232 15:06:17 -- common/autotest_common.sh@852 -- # return 0 00:20:59.232 15:06:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:59.232 15:06:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:59.232 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:20:59.232 15:06:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.232 15:06:17 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:59.232 15:06:17 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:59.232 15:06:17 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.489 [2024-06-11 15:06:18.092546] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.489 15:06:18 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:59.749 15:06:18 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:59.749 [2024-06-11 15:06:18.561807] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.749 [2024-06-11 15:06:18.562003] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.749 15:06:18 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:00.008 malloc0 00:21:00.008 15:06:18 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.266 15:06:19 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:00.525 15:06:19 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.525 15:06:19 -- target/tls.sh@197 -- # bdevperf_pid=3316957 00:21:00.525 15:06:19 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.525 15:06:19 -- target/tls.sh@200 -- # waitforlisten 3316957 /var/tmp/bdevperf.sock 00:21:00.525 15:06:19 -- common/autotest_common.sh@819 -- # '[' -z 3316957 ']' 00:21:00.525 15:06:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.525 15:06:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:00.525 15:06:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
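With the key restored to 0600 at target/tls.sh@190, the setup_nvmf_tgt sequence just traced brings the target up with a TLS listener and a PSK-bound host entry, and the bdevperf side then attaches over the secure channel in the lines that follow. The same calls, condensed into one sketch (paths are the ones used by this job):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt

    # Target side (default RPC socket): TCP transport, subsystem, TLS listener,
    # malloc namespace, and the host entry that binds host1 to the PSK file.
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # Initiator side, against the bdevperf RPC socket started with -z above:
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"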
00:21:00.525 15:06:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:00.525 15:06:19 -- common/autotest_common.sh@10 -- # set +x 00:21:00.525 [2024-06-11 15:06:19.317851] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:00.525 [2024-06-11 15:06:19.317911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316957 ] 00:21:00.525 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.783 [2024-06-11 15:06:19.383013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.783 [2024-06-11 15:06:19.453123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.718 15:06:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:01.718 15:06:20 -- common/autotest_common.sh@852 -- # return 0 00:21:01.718 15:06:20 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:01.718 [2024-06-11 15:06:20.474664] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.718 TLSTESTn1 00:21:01.976 15:06:20 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:02.235 15:06:20 -- target/tls.sh@205 -- # tgtconf='{ 00:21:02.235 "subsystems": [ 00:21:02.235 { 00:21:02.235 "subsystem": "iobuf", 00:21:02.235 "config": [ 00:21:02.235 { 00:21:02.235 "method": "iobuf_set_options", 00:21:02.235 "params": { 00:21:02.235 "small_pool_count": 8192, 00:21:02.235 "large_pool_count": 1024, 00:21:02.235 "small_bufsize": 8192, 00:21:02.235 "large_bufsize": 135168 00:21:02.235 } 00:21:02.235 } 00:21:02.235 ] 00:21:02.235 }, 00:21:02.235 { 00:21:02.235 "subsystem": "sock", 00:21:02.235 "config": [ 00:21:02.235 { 00:21:02.235 "method": "sock_impl_set_options", 00:21:02.235 "params": { 00:21:02.235 "impl_name": "posix", 00:21:02.235 "recv_buf_size": 2097152, 00:21:02.235 "send_buf_size": 2097152, 00:21:02.235 "enable_recv_pipe": true, 00:21:02.235 "enable_quickack": false, 00:21:02.235 "enable_placement_id": 0, 00:21:02.235 "enable_zerocopy_send_server": true, 00:21:02.235 "enable_zerocopy_send_client": false, 00:21:02.235 "zerocopy_threshold": 0, 00:21:02.235 "tls_version": 0, 00:21:02.235 "enable_ktls": false 00:21:02.235 } 00:21:02.235 }, 00:21:02.235 { 00:21:02.235 "method": "sock_impl_set_options", 00:21:02.235 "params": { 00:21:02.235 "impl_name": "ssl", 00:21:02.235 "recv_buf_size": 4096, 00:21:02.235 "send_buf_size": 4096, 00:21:02.235 "enable_recv_pipe": true, 00:21:02.235 "enable_quickack": false, 00:21:02.235 "enable_placement_id": 0, 00:21:02.235 "enable_zerocopy_send_server": true, 00:21:02.235 "enable_zerocopy_send_client": false, 00:21:02.235 "zerocopy_threshold": 0, 00:21:02.235 "tls_version": 0, 00:21:02.235 "enable_ktls": false 00:21:02.235 } 00:21:02.235 } 00:21:02.235 ] 00:21:02.235 }, 00:21:02.235 { 00:21:02.235 "subsystem": "vmd", 00:21:02.235 "config": [] 00:21:02.235 }, 00:21:02.235 { 00:21:02.235 "subsystem": "accel", 00:21:02.235 "config": [ 00:21:02.235 { 00:21:02.235 "method": "accel_set_options", 00:21:02.235 "params": { 00:21:02.235 "small_cache_size": 128, 
00:21:02.235 "large_cache_size": 16, 00:21:02.235 "task_count": 2048, 00:21:02.235 "sequence_count": 2048, 00:21:02.235 "buf_count": 2048 00:21:02.235 } 00:21:02.235 } 00:21:02.235 ] 00:21:02.235 }, 00:21:02.235 { 00:21:02.235 "subsystem": "bdev", 00:21:02.235 "config": [ 00:21:02.235 { 00:21:02.235 "method": "bdev_set_options", 00:21:02.235 "params": { 00:21:02.236 "bdev_io_pool_size": 65535, 00:21:02.236 "bdev_io_cache_size": 256, 00:21:02.236 "bdev_auto_examine": true, 00:21:02.236 "iobuf_small_cache_size": 128, 00:21:02.236 "iobuf_large_cache_size": 16 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "bdev_raid_set_options", 00:21:02.236 "params": { 00:21:02.236 "process_window_size_kb": 1024 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "bdev_iscsi_set_options", 00:21:02.236 "params": { 00:21:02.236 "timeout_sec": 30 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "bdev_nvme_set_options", 00:21:02.236 "params": { 00:21:02.236 "action_on_timeout": "none", 00:21:02.236 "timeout_us": 0, 00:21:02.236 "timeout_admin_us": 0, 00:21:02.236 "keep_alive_timeout_ms": 10000, 00:21:02.236 "transport_retry_count": 4, 00:21:02.236 "arbitration_burst": 0, 00:21:02.236 "low_priority_weight": 0, 00:21:02.236 "medium_priority_weight": 0, 00:21:02.236 "high_priority_weight": 0, 00:21:02.236 "nvme_adminq_poll_period_us": 10000, 00:21:02.236 "nvme_ioq_poll_period_us": 0, 00:21:02.236 "io_queue_requests": 0, 00:21:02.236 "delay_cmd_submit": true, 00:21:02.236 "bdev_retry_count": 3, 00:21:02.236 "transport_ack_timeout": 0, 00:21:02.236 "ctrlr_loss_timeout_sec": 0, 00:21:02.236 "reconnect_delay_sec": 0, 00:21:02.236 "fast_io_fail_timeout_sec": 0, 00:21:02.236 "generate_uuids": false, 00:21:02.236 "transport_tos": 0, 00:21:02.236 "io_path_stat": false, 00:21:02.236 "allow_accel_sequence": false 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "bdev_nvme_set_hotplug", 00:21:02.236 "params": { 00:21:02.236 "period_us": 100000, 00:21:02.236 "enable": false 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "bdev_malloc_create", 00:21:02.236 "params": { 00:21:02.236 "name": "malloc0", 00:21:02.236 "num_blocks": 8192, 00:21:02.236 "block_size": 4096, 00:21:02.236 "physical_block_size": 4096, 00:21:02.236 "uuid": "1527a496-0fbd-4f50-a2b7-5c5971a13dad", 00:21:02.236 "optimal_io_boundary": 0 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "bdev_wait_for_examine" 00:21:02.236 } 00:21:02.236 ] 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "subsystem": "nbd", 00:21:02.236 "config": [] 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "subsystem": "scheduler", 00:21:02.236 "config": [ 00:21:02.236 { 00:21:02.236 "method": "framework_set_scheduler", 00:21:02.236 "params": { 00:21:02.236 "name": "static" 00:21:02.236 } 00:21:02.236 } 00:21:02.236 ] 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "subsystem": "nvmf", 00:21:02.236 "config": [ 00:21:02.236 { 00:21:02.236 "method": "nvmf_set_config", 00:21:02.236 "params": { 00:21:02.236 "discovery_filter": "match_any", 00:21:02.236 "admin_cmd_passthru": { 00:21:02.236 "identify_ctrlr": false 00:21:02.236 } 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "nvmf_set_max_subsystems", 00:21:02.236 "params": { 00:21:02.236 "max_subsystems": 1024 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "nvmf_set_crdt", 00:21:02.236 "params": { 00:21:02.236 "crdt1": 0, 00:21:02.236 "crdt2": 0, 00:21:02.236 "crdt3": 0 00:21:02.236 } 
00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "nvmf_create_transport", 00:21:02.236 "params": { 00:21:02.236 "trtype": "TCP", 00:21:02.236 "max_queue_depth": 128, 00:21:02.236 "max_io_qpairs_per_ctrlr": 127, 00:21:02.236 "in_capsule_data_size": 4096, 00:21:02.236 "max_io_size": 131072, 00:21:02.236 "io_unit_size": 131072, 00:21:02.236 "max_aq_depth": 128, 00:21:02.236 "num_shared_buffers": 511, 00:21:02.236 "buf_cache_size": 4294967295, 00:21:02.236 "dif_insert_or_strip": false, 00:21:02.236 "zcopy": false, 00:21:02.236 "c2h_success": false, 00:21:02.236 "sock_priority": 0, 00:21:02.236 "abort_timeout_sec": 1 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "nvmf_create_subsystem", 00:21:02.236 "params": { 00:21:02.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.236 "allow_any_host": false, 00:21:02.236 "serial_number": "SPDK00000000000001", 00:21:02.236 "model_number": "SPDK bdev Controller", 00:21:02.236 "max_namespaces": 10, 00:21:02.236 "min_cntlid": 1, 00:21:02.236 "max_cntlid": 65519, 00:21:02.236 "ana_reporting": false 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "nvmf_subsystem_add_host", 00:21:02.236 "params": { 00:21:02.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.236 "host": "nqn.2016-06.io.spdk:host1", 00:21:02.236 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "nvmf_subsystem_add_ns", 00:21:02.236 "params": { 00:21:02.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.236 "namespace": { 00:21:02.236 "nsid": 1, 00:21:02.236 "bdev_name": "malloc0", 00:21:02.236 "nguid": "1527A4960FBD4F50A2B75C5971A13DAD", 00:21:02.236 "uuid": "1527a496-0fbd-4f50-a2b7-5c5971a13dad" 00:21:02.236 } 00:21:02.236 } 00:21:02.236 }, 00:21:02.236 { 00:21:02.236 "method": "nvmf_subsystem_add_listener", 00:21:02.236 "params": { 00:21:02.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.236 "listen_address": { 00:21:02.236 "trtype": "TCP", 00:21:02.236 "adrfam": "IPv4", 00:21:02.236 "traddr": "10.0.0.2", 00:21:02.236 "trsvcid": "4420" 00:21:02.236 }, 00:21:02.236 "secure_channel": true 00:21:02.236 } 00:21:02.236 } 00:21:02.236 ] 00:21:02.236 } 00:21:02.236 ] 00:21:02.236 }' 00:21:02.236 15:06:20 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:02.496 15:06:21 -- target/tls.sh@206 -- # bdevperfconf='{ 00:21:02.496 "subsystems": [ 00:21:02.496 { 00:21:02.496 "subsystem": "iobuf", 00:21:02.496 "config": [ 00:21:02.496 { 00:21:02.496 "method": "iobuf_set_options", 00:21:02.496 "params": { 00:21:02.496 "small_pool_count": 8192, 00:21:02.496 "large_pool_count": 1024, 00:21:02.496 "small_bufsize": 8192, 00:21:02.496 "large_bufsize": 135168 00:21:02.496 } 00:21:02.496 } 00:21:02.496 ] 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "subsystem": "sock", 00:21:02.496 "config": [ 00:21:02.496 { 00:21:02.496 "method": "sock_impl_set_options", 00:21:02.496 "params": { 00:21:02.496 "impl_name": "posix", 00:21:02.496 "recv_buf_size": 2097152, 00:21:02.496 "send_buf_size": 2097152, 00:21:02.496 "enable_recv_pipe": true, 00:21:02.496 "enable_quickack": false, 00:21:02.496 "enable_placement_id": 0, 00:21:02.496 "enable_zerocopy_send_server": true, 00:21:02.496 "enable_zerocopy_send_client": false, 00:21:02.496 "zerocopy_threshold": 0, 00:21:02.496 "tls_version": 0, 00:21:02.496 "enable_ktls": false 00:21:02.496 } 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "method": 
"sock_impl_set_options", 00:21:02.496 "params": { 00:21:02.496 "impl_name": "ssl", 00:21:02.496 "recv_buf_size": 4096, 00:21:02.496 "send_buf_size": 4096, 00:21:02.496 "enable_recv_pipe": true, 00:21:02.496 "enable_quickack": false, 00:21:02.496 "enable_placement_id": 0, 00:21:02.496 "enable_zerocopy_send_server": true, 00:21:02.496 "enable_zerocopy_send_client": false, 00:21:02.496 "zerocopy_threshold": 0, 00:21:02.496 "tls_version": 0, 00:21:02.496 "enable_ktls": false 00:21:02.496 } 00:21:02.496 } 00:21:02.496 ] 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "subsystem": "vmd", 00:21:02.496 "config": [] 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "subsystem": "accel", 00:21:02.496 "config": [ 00:21:02.496 { 00:21:02.496 "method": "accel_set_options", 00:21:02.496 "params": { 00:21:02.496 "small_cache_size": 128, 00:21:02.496 "large_cache_size": 16, 00:21:02.496 "task_count": 2048, 00:21:02.496 "sequence_count": 2048, 00:21:02.496 "buf_count": 2048 00:21:02.496 } 00:21:02.496 } 00:21:02.496 ] 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "subsystem": "bdev", 00:21:02.496 "config": [ 00:21:02.496 { 00:21:02.496 "method": "bdev_set_options", 00:21:02.496 "params": { 00:21:02.496 "bdev_io_pool_size": 65535, 00:21:02.496 "bdev_io_cache_size": 256, 00:21:02.496 "bdev_auto_examine": true, 00:21:02.496 "iobuf_small_cache_size": 128, 00:21:02.496 "iobuf_large_cache_size": 16 00:21:02.496 } 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "method": "bdev_raid_set_options", 00:21:02.496 "params": { 00:21:02.496 "process_window_size_kb": 1024 00:21:02.496 } 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "method": "bdev_iscsi_set_options", 00:21:02.496 "params": { 00:21:02.496 "timeout_sec": 30 00:21:02.496 } 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "method": "bdev_nvme_set_options", 00:21:02.496 "params": { 00:21:02.496 "action_on_timeout": "none", 00:21:02.496 "timeout_us": 0, 00:21:02.496 "timeout_admin_us": 0, 00:21:02.496 "keep_alive_timeout_ms": 10000, 00:21:02.496 "transport_retry_count": 4, 00:21:02.496 "arbitration_burst": 0, 00:21:02.496 "low_priority_weight": 0, 00:21:02.496 "medium_priority_weight": 0, 00:21:02.496 "high_priority_weight": 0, 00:21:02.496 "nvme_adminq_poll_period_us": 10000, 00:21:02.496 "nvme_ioq_poll_period_us": 0, 00:21:02.496 "io_queue_requests": 512, 00:21:02.496 "delay_cmd_submit": true, 00:21:02.496 "bdev_retry_count": 3, 00:21:02.496 "transport_ack_timeout": 0, 00:21:02.496 "ctrlr_loss_timeout_sec": 0, 00:21:02.496 "reconnect_delay_sec": 0, 00:21:02.496 "fast_io_fail_timeout_sec": 0, 00:21:02.496 "generate_uuids": false, 00:21:02.496 "transport_tos": 0, 00:21:02.496 "io_path_stat": false, 00:21:02.496 "allow_accel_sequence": false 00:21:02.496 } 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "method": "bdev_nvme_attach_controller", 00:21:02.496 "params": { 00:21:02.496 "name": "TLSTEST", 00:21:02.496 "trtype": "TCP", 00:21:02.496 "adrfam": "IPv4", 00:21:02.496 "traddr": "10.0.0.2", 00:21:02.496 "trsvcid": "4420", 00:21:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.496 "prchk_reftag": false, 00:21:02.496 "prchk_guard": false, 00:21:02.496 "ctrlr_loss_timeout_sec": 0, 00:21:02.496 "reconnect_delay_sec": 0, 00:21:02.496 "fast_io_fail_timeout_sec": 0, 00:21:02.496 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:02.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.496 "hdgst": false, 00:21:02.496 "ddgst": false 00:21:02.496 } 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "method": "bdev_nvme_set_hotplug", 00:21:02.496 
"params": { 00:21:02.496 "period_us": 100000, 00:21:02.496 "enable": false 00:21:02.496 } 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "method": "bdev_wait_for_examine" 00:21:02.496 } 00:21:02.496 ] 00:21:02.496 }, 00:21:02.496 { 00:21:02.496 "subsystem": "nbd", 00:21:02.496 "config": [] 00:21:02.496 } 00:21:02.496 ] 00:21:02.496 }' 00:21:02.496 15:06:21 -- target/tls.sh@208 -- # killprocess 3316957 00:21:02.496 15:06:21 -- common/autotest_common.sh@926 -- # '[' -z 3316957 ']' 00:21:02.496 15:06:21 -- common/autotest_common.sh@930 -- # kill -0 3316957 00:21:02.496 15:06:21 -- common/autotest_common.sh@931 -- # uname 00:21:02.496 15:06:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:02.496 15:06:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3316957 00:21:02.496 15:06:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:02.496 15:06:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:02.496 15:06:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3316957' 00:21:02.496 killing process with pid 3316957 00:21:02.496 15:06:21 -- common/autotest_common.sh@945 -- # kill 3316957 00:21:02.496 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.496 00:21:02.496 Latency(us) 00:21:02.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.496 =================================================================================================================== 00:21:02.496 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.496 15:06:21 -- common/autotest_common.sh@950 -- # wait 3316957 00:21:02.755 15:06:21 -- target/tls.sh@209 -- # killprocess 3316643 00:21:02.755 15:06:21 -- common/autotest_common.sh@926 -- # '[' -z 3316643 ']' 00:21:02.755 15:06:21 -- common/autotest_common.sh@930 -- # kill -0 3316643 00:21:02.755 15:06:21 -- common/autotest_common.sh@931 -- # uname 00:21:02.755 15:06:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:02.755 15:06:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3316643 00:21:02.755 15:06:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:02.755 15:06:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:02.755 15:06:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3316643' 00:21:02.755 killing process with pid 3316643 00:21:02.755 15:06:21 -- common/autotest_common.sh@945 -- # kill 3316643 00:21:02.755 15:06:21 -- common/autotest_common.sh@950 -- # wait 3316643 00:21:03.015 15:06:21 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:03.015 15:06:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:03.015 15:06:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:03.015 15:06:21 -- target/tls.sh@212 -- # echo '{ 00:21:03.015 "subsystems": [ 00:21:03.015 { 00:21:03.015 "subsystem": "iobuf", 00:21:03.015 "config": [ 00:21:03.015 { 00:21:03.015 "method": "iobuf_set_options", 00:21:03.015 "params": { 00:21:03.015 "small_pool_count": 8192, 00:21:03.015 "large_pool_count": 1024, 00:21:03.015 "small_bufsize": 8192, 00:21:03.015 "large_bufsize": 135168 00:21:03.015 } 00:21:03.015 } 00:21:03.015 ] 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "subsystem": "sock", 00:21:03.015 "config": [ 00:21:03.015 { 00:21:03.015 "method": "sock_impl_set_options", 00:21:03.015 "params": { 00:21:03.015 "impl_name": "posix", 00:21:03.015 "recv_buf_size": 2097152, 00:21:03.015 "send_buf_size": 2097152, 
00:21:03.015 "enable_recv_pipe": true, 00:21:03.015 "enable_quickack": false, 00:21:03.015 "enable_placement_id": 0, 00:21:03.015 "enable_zerocopy_send_server": true, 00:21:03.015 "enable_zerocopy_send_client": false, 00:21:03.015 "zerocopy_threshold": 0, 00:21:03.015 "tls_version": 0, 00:21:03.015 "enable_ktls": false 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "sock_impl_set_options", 00:21:03.015 "params": { 00:21:03.015 "impl_name": "ssl", 00:21:03.015 "recv_buf_size": 4096, 00:21:03.015 "send_buf_size": 4096, 00:21:03.015 "enable_recv_pipe": true, 00:21:03.015 "enable_quickack": false, 00:21:03.015 "enable_placement_id": 0, 00:21:03.015 "enable_zerocopy_send_server": true, 00:21:03.015 "enable_zerocopy_send_client": false, 00:21:03.015 "zerocopy_threshold": 0, 00:21:03.015 "tls_version": 0, 00:21:03.015 "enable_ktls": false 00:21:03.015 } 00:21:03.015 } 00:21:03.015 ] 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "subsystem": "vmd", 00:21:03.015 "config": [] 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "subsystem": "accel", 00:21:03.015 "config": [ 00:21:03.015 { 00:21:03.015 "method": "accel_set_options", 00:21:03.015 "params": { 00:21:03.015 "small_cache_size": 128, 00:21:03.015 "large_cache_size": 16, 00:21:03.015 "task_count": 2048, 00:21:03.015 "sequence_count": 2048, 00:21:03.015 "buf_count": 2048 00:21:03.015 } 00:21:03.015 } 00:21:03.015 ] 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "subsystem": "bdev", 00:21:03.015 "config": [ 00:21:03.015 { 00:21:03.015 "method": "bdev_set_options", 00:21:03.015 "params": { 00:21:03.015 "bdev_io_pool_size": 65535, 00:21:03.015 "bdev_io_cache_size": 256, 00:21:03.015 "bdev_auto_examine": true, 00:21:03.015 "iobuf_small_cache_size": 128, 00:21:03.015 "iobuf_large_cache_size": 16 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "bdev_raid_set_options", 00:21:03.015 "params": { 00:21:03.015 "process_window_size_kb": 1024 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "bdev_iscsi_set_options", 00:21:03.015 "params": { 00:21:03.015 "timeout_sec": 30 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "bdev_nvme_set_options", 00:21:03.015 "params": { 00:21:03.015 "action_on_timeout": "none", 00:21:03.015 "timeout_us": 0, 00:21:03.015 "timeout_admin_us": 0, 00:21:03.015 "keep_alive_timeout_ms": 10000, 00:21:03.015 "transport_retry_count": 4, 00:21:03.015 "arbitration_burst": 0, 00:21:03.015 "low_priority_weight": 0, 00:21:03.015 "medium_priority_weight": 0, 00:21:03.015 "high_priority_weight": 0, 00:21:03.015 "nvme_adminq_poll_period_us": 10000, 00:21:03.015 "nvme_ioq_poll_period_us": 0, 00:21:03.015 "io_queue_requests": 0, 00:21:03.015 "delay_cmd_submit": true, 00:21:03.015 "bdev_retry_count": 3, 00:21:03.015 "transport_ack_timeout": 0, 00:21:03.015 "ctrlr_loss_timeout_sec": 0, 00:21:03.015 "reconnect_delay_sec": 0, 00:21:03.015 "fast_io_fail_timeout_sec": 0, 00:21:03.015 "generate_uuids": false, 00:21:03.015 "transport_tos": 0, 00:21:03.015 "io_path_stat": false, 00:21:03.015 "allow_accel_sequence": false 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "bdev_nvme_set_hotplug", 00:21:03.015 "params": { 00:21:03.015 "period_us": 100000, 00:21:03.015 "enable": false 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "bdev_malloc_create", 00:21:03.015 "params": { 00:21:03.015 "name": "malloc0", 00:21:03.015 "num_blocks": 8192, 00:21:03.015 "block_size": 4096, 00:21:03.015 "physical_block_size": 4096, 00:21:03.015 "uuid": 
"1527a496-0fbd-4f50-a2b7-5c5971a13dad", 00:21:03.015 "optimal_io_boundary": 0 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "bdev_wait_for_examine" 00:21:03.015 } 00:21:03.015 ] 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "subsystem": "nbd", 00:21:03.015 "config": [] 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "subsystem": "scheduler", 00:21:03.015 "config": [ 00:21:03.015 { 00:21:03.015 "method": "framework_set_scheduler", 00:21:03.015 "params": { 00:21:03.015 "name": "static" 00:21:03.015 } 00:21:03.015 } 00:21:03.015 ] 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "subsystem": "nvmf", 00:21:03.015 "config": [ 00:21:03.015 { 00:21:03.015 "method": "nvmf_set_config", 00:21:03.015 "params": { 00:21:03.015 "discovery_filter": "match_any", 00:21:03.015 "admin_cmd_passthru": { 00:21:03.015 "identify_ctrlr": false 00:21:03.015 } 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "nvmf_set_max_subsystems", 00:21:03.015 "params": { 00:21:03.015 "max_subsystems": 1024 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "nvmf_set_crdt", 00:21:03.015 "params": { 00:21:03.015 "crdt1": 0, 00:21:03.015 "crdt2": 0, 00:21:03.015 "crdt3": 0 00:21:03.015 } 00:21:03.015 }, 00:21:03.015 { 00:21:03.015 "method": "nvmf_create_transport", 00:21:03.015 "params": { 00:21:03.015 "trtype": "TCP", 00:21:03.015 "max_queue_depth": 128, 00:21:03.015 "max_io_qpairs_per_ctrlr": 127, 00:21:03.016 "in_capsule_data_size": 4096, 00:21:03.016 "max_io_size": 131072, 00:21:03.016 "io_unit_size": 131072, 00:21:03.016 "max_aq_depth": 128, 00:21:03.016 "num_shared_buffers": 511, 00:21:03.016 "buf_cache_size": 4294967295, 00:21:03.016 "dif_insert_or_strip": false, 00:21:03.016 "zcopy": false, 00:21:03.016 "c2h_success": false, 00:21:03.016 "sock_priority": 0, 00:21:03.016 "abort_timeout_sec": 1 00:21:03.016 } 00:21:03.016 }, 00:21:03.016 { 00:21:03.016 "method": "nvmf_create_subsystem", 00:21:03.016 "params": { 00:21:03.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.016 "allow_any_host": false, 00:21:03.016 "serial_number": "SPDK00000000000001", 00:21:03.016 "model_number": "SPDK bdev Controller", 00:21:03.016 "max_namespaces": 10, 00:21:03.016 "min_cntlid": 1, 00:21:03.016 "max_cntlid": 65519, 00:21:03.016 "ana_reporting": false 00:21:03.016 } 00:21:03.016 }, 00:21:03.016 { 00:21:03.016 "method": "nvmf_subsystem_add_host", 00:21:03.016 "params": { 00:21:03.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.016 "host": "nqn.2016-06.io.spdk:host1", 00:21:03.016 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:03.016 } 00:21:03.016 }, 00:21:03.016 { 00:21:03.016 "method": "nvmf_subsystem_add_ns", 00:21:03.016 "params": { 00:21:03.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.016 "namespace": { 00:21:03.016 "nsid": 1, 00:21:03.016 "bdev_name": "malloc0", 00:21:03.016 "nguid": "1527A4960FBD4F50A2B75C5971A13DAD", 00:21:03.016 "uuid": "1527a496-0fbd-4f50-a2b7-5c5971a13dad" 00:21:03.016 } 00:21:03.016 } 00:21:03.016 }, 00:21:03.016 { 00:21:03.016 "method": "nvmf_subsystem_add_listener", 00:21:03.016 "params": { 00:21:03.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.016 "listen_address": { 00:21:03.016 "trtype": "TCP", 00:21:03.016 "adrfam": "IPv4", 00:21:03.016 "traddr": "10.0.0.2", 00:21:03.016 "trsvcid": "4420" 00:21:03.016 }, 00:21:03.016 "secure_channel": true 00:21:03.016 } 00:21:03.016 } 00:21:03.016 ] 00:21:03.016 } 00:21:03.016 ] 00:21:03.016 }' 00:21:03.016 15:06:21 -- common/autotest_common.sh@10 -- # set +x 
00:21:03.016 15:06:21 -- nvmf/common.sh@469 -- # nvmfpid=3317501 00:21:03.016 15:06:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:03.016 15:06:21 -- nvmf/common.sh@470 -- # waitforlisten 3317501 00:21:03.016 15:06:21 -- common/autotest_common.sh@819 -- # '[' -z 3317501 ']' 00:21:03.016 15:06:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.016 15:06:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:03.016 15:06:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.016 15:06:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:03.016 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:21:03.016 [2024-06-11 15:06:21.772490] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:03.016 [2024-06-11 15:06:21.772548] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.016 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.275 [2024-06-11 15:06:21.858942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.275 [2024-06-11 15:06:21.944320] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:03.275 [2024-06-11 15:06:21.944461] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.275 [2024-06-11 15:06:21.944472] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.275 [2024-06-11 15:06:21.944481] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:03.275 [2024-06-11 15:06:21.944502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.532 [2024-06-11 15:06:22.145595] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.533 [2024-06-11 15:06:22.177606] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.533 [2024-06-11 15:06:22.177802] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.099 15:06:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:04.099 15:06:22 -- common/autotest_common.sh@852 -- # return 0 00:21:04.099 15:06:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:04.099 15:06:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:04.099 15:06:22 -- common/autotest_common.sh@10 -- # set +x 00:21:04.099 15:06:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.099 15:06:22 -- target/tls.sh@216 -- # bdevperf_pid=3317741 00:21:04.099 15:06:22 -- target/tls.sh@217 -- # waitforlisten 3317741 /var/tmp/bdevperf.sock 00:21:04.099 15:06:22 -- common/autotest_common.sh@819 -- # '[' -z 3317741 ']' 00:21:04.099 15:06:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.099 15:06:22 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:04.099 15:06:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:04.099 15:06:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:04.099 15:06:22 -- target/tls.sh@213 -- # echo '{ 00:21:04.099 "subsystems": [ 00:21:04.099 { 00:21:04.099 "subsystem": "iobuf", 00:21:04.099 "config": [ 00:21:04.099 { 00:21:04.099 "method": "iobuf_set_options", 00:21:04.099 "params": { 00:21:04.099 "small_pool_count": 8192, 00:21:04.099 "large_pool_count": 1024, 00:21:04.099 "small_bufsize": 8192, 00:21:04.099 "large_bufsize": 135168 00:21:04.099 } 00:21:04.099 } 00:21:04.099 ] 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "subsystem": "sock", 00:21:04.099 "config": [ 00:21:04.099 { 00:21:04.099 "method": "sock_impl_set_options", 00:21:04.099 "params": { 00:21:04.099 "impl_name": "posix", 00:21:04.099 "recv_buf_size": 2097152, 00:21:04.099 "send_buf_size": 2097152, 00:21:04.099 "enable_recv_pipe": true, 00:21:04.099 "enable_quickack": false, 00:21:04.099 "enable_placement_id": 0, 00:21:04.099 "enable_zerocopy_send_server": true, 00:21:04.099 "enable_zerocopy_send_client": false, 00:21:04.099 "zerocopy_threshold": 0, 00:21:04.099 "tls_version": 0, 00:21:04.099 "enable_ktls": false 00:21:04.099 } 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "method": "sock_impl_set_options", 00:21:04.099 "params": { 00:21:04.099 "impl_name": "ssl", 00:21:04.099 "recv_buf_size": 4096, 00:21:04.099 "send_buf_size": 4096, 00:21:04.099 "enable_recv_pipe": true, 00:21:04.099 "enable_quickack": false, 00:21:04.099 "enable_placement_id": 0, 00:21:04.099 "enable_zerocopy_send_server": true, 00:21:04.099 "enable_zerocopy_send_client": false, 00:21:04.099 "zerocopy_threshold": 0, 00:21:04.099 "tls_version": 0, 00:21:04.099 "enable_ktls": false 00:21:04.099 } 00:21:04.099 } 00:21:04.099 ] 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "subsystem": "vmd", 00:21:04.099 "config": [] 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "subsystem": "accel", 00:21:04.099 "config": [ 00:21:04.099 { 00:21:04.099 "method": "accel_set_options", 00:21:04.099 "params": { 00:21:04.099 "small_cache_size": 128, 00:21:04.099 "large_cache_size": 16, 00:21:04.099 "task_count": 2048, 00:21:04.099 "sequence_count": 2048, 00:21:04.099 "buf_count": 2048 00:21:04.099 } 00:21:04.099 } 00:21:04.099 ] 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "subsystem": "bdev", 00:21:04.099 "config": [ 00:21:04.099 { 00:21:04.099 "method": "bdev_set_options", 00:21:04.099 "params": { 00:21:04.099 "bdev_io_pool_size": 65535, 00:21:04.099 "bdev_io_cache_size": 256, 00:21:04.099 "bdev_auto_examine": true, 00:21:04.099 "iobuf_small_cache_size": 128, 00:21:04.099 "iobuf_large_cache_size": 16 00:21:04.099 } 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "method": "bdev_raid_set_options", 00:21:04.099 "params": { 00:21:04.099 "process_window_size_kb": 1024 00:21:04.099 } 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "method": "bdev_iscsi_set_options", 00:21:04.099 "params": { 00:21:04.099 "timeout_sec": 30 00:21:04.099 } 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "method": "bdev_nvme_set_options", 00:21:04.099 "params": { 00:21:04.099 "action_on_timeout": "none", 00:21:04.099 "timeout_us": 0, 00:21:04.099 "timeout_admin_us": 0, 00:21:04.099 "keep_alive_timeout_ms": 10000, 00:21:04.099 "transport_retry_count": 4, 00:21:04.099 "arbitration_burst": 0, 00:21:04.099 "low_priority_weight": 0, 00:21:04.099 "medium_priority_weight": 0, 00:21:04.099 "high_priority_weight": 0, 00:21:04.099 "nvme_adminq_poll_period_us": 10000, 00:21:04.099 "nvme_ioq_poll_period_us": 0, 00:21:04.099 "io_queue_requests": 512, 00:21:04.099 "delay_cmd_submit": true, 00:21:04.099 "bdev_retry_count": 3, 00:21:04.099 "transport_ack_timeout": 0, 00:21:04.099 
"ctrlr_loss_timeout_sec": 0, 00:21:04.099 "reconnect_delay_sec": 0, 00:21:04.099 "fast_io_fail_timeout_sec": 0, 00:21:04.099 "generate_uuids": false, 00:21:04.099 "transport_tos": 0, 00:21:04.099 "io_path_stat": false, 00:21:04.099 "allow_accel_sequence": false 00:21:04.099 } 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "method": "bdev_nvme_attach_controller", 00:21:04.099 "params": { 00:21:04.099 "name": "TLSTEST", 00:21:04.099 "trtype": "TCP", 00:21:04.099 "adrfam": "IPv4", 00:21:04.099 "traddr": "10.0.0.2", 00:21:04.099 "trsvcid": "4420", 00:21:04.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.099 "prchk_reftag": false, 00:21:04.099 "prchk_guard": false, 00:21:04.099 "ctrlr_loss_timeout_sec": 0, 00:21:04.099 "reconnect_delay_sec": 0, 00:21:04.099 "fast_io_fail_timeout_sec": 0, 00:21:04.099 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:04.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.099 "hdgst": false, 00:21:04.099 "ddgst": false 00:21:04.099 } 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "method": "bdev_nvme_set_hotplug", 00:21:04.099 "params": { 00:21:04.099 "period_us": 100000, 00:21:04.099 "enable": false 00:21:04.099 } 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "method": "bdev_wait_for_examine" 00:21:04.099 } 00:21:04.099 ] 00:21:04.099 }, 00:21:04.099 { 00:21:04.099 "subsystem": "nbd", 00:21:04.099 "config": [] 00:21:04.099 } 00:21:04.099 ] 00:21:04.099 }' 00:21:04.099 15:06:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:04.099 15:06:22 -- common/autotest_common.sh@10 -- # set +x 00:21:04.099 [2024-06-11 15:06:22.783917] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:04.099 [2024-06-11 15:06:22.783977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317741 ] 00:21:04.099 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.099 [2024-06-11 15:06:22.847183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.099 [2024-06-11 15:06:22.913101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.358 [2024-06-11 15:06:23.045433] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.924 15:06:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:04.924 15:06:23 -- common/autotest_common.sh@852 -- # return 0 00:21:04.924 15:06:23 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:05.182 Running I/O for 10 seconds... 
00:21:15.160 00:21:15.160 Latency(us) 00:21:15.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.160 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:15.160 Verification LBA range: start 0x0 length 0x2000 00:21:15.160 TLSTESTn1 : 10.03 2563.59 10.01 0.00 0.00 49877.04 3425.75 60293.12 00:21:15.160 =================================================================================================================== 00:21:15.160 Total : 2563.59 10.01 0.00 0.00 49877.04 3425.75 60293.12 00:21:15.160 0 00:21:15.160 15:06:33 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.160 15:06:33 -- target/tls.sh@223 -- # killprocess 3317741 00:21:15.160 15:06:33 -- common/autotest_common.sh@926 -- # '[' -z 3317741 ']' 00:21:15.160 15:06:33 -- common/autotest_common.sh@930 -- # kill -0 3317741 00:21:15.160 15:06:33 -- common/autotest_common.sh@931 -- # uname 00:21:15.160 15:06:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:15.160 15:06:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3317741 00:21:15.160 15:06:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:15.161 15:06:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:15.161 15:06:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3317741' 00:21:15.161 killing process with pid 3317741 00:21:15.161 15:06:33 -- common/autotest_common.sh@945 -- # kill 3317741 00:21:15.161 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.161 00:21:15.161 Latency(us) 00:21:15.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.161 =================================================================================================================== 00:21:15.161 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.161 15:06:33 -- common/autotest_common.sh@950 -- # wait 3317741 00:21:15.420 15:06:34 -- target/tls.sh@224 -- # killprocess 3317501 00:21:15.420 15:06:34 -- common/autotest_common.sh@926 -- # '[' -z 3317501 ']' 00:21:15.420 15:06:34 -- common/autotest_common.sh@930 -- # kill -0 3317501 00:21:15.420 15:06:34 -- common/autotest_common.sh@931 -- # uname 00:21:15.420 15:06:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:15.420 15:06:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3317501 00:21:15.420 15:06:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:15.420 15:06:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:15.420 15:06:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3317501' 00:21:15.420 killing process with pid 3317501 00:21:15.420 15:06:34 -- common/autotest_common.sh@945 -- # kill 3317501 00:21:15.420 15:06:34 -- common/autotest_common.sh@950 -- # wait 3317501 00:21:15.679 15:06:34 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:21:15.679 15:06:34 -- target/tls.sh@227 -- # cleanup 00:21:15.679 15:06:34 -- target/tls.sh@15 -- # process_shm --id 0 00:21:15.679 15:06:34 -- common/autotest_common.sh@796 -- # type=--id 00:21:15.679 15:06:34 -- common/autotest_common.sh@797 -- # id=0 00:21:15.679 15:06:34 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:15.679 15:06:34 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:15.679 15:06:34 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:15.679 15:06:34 -- common/autotest_common.sh@804 -- # 
[[ -z nvmf_trace.0 ]] 00:21:15.679 15:06:34 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:15.679 15:06:34 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:15.679 nvmf_trace.0 00:21:15.679 15:06:34 -- common/autotest_common.sh@811 -- # return 0 00:21:15.679 15:06:34 -- target/tls.sh@16 -- # killprocess 3317741 00:21:15.679 15:06:34 -- common/autotest_common.sh@926 -- # '[' -z 3317741 ']' 00:21:15.679 15:06:34 -- common/autotest_common.sh@930 -- # kill -0 3317741 00:21:15.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3317741) - No such process 00:21:15.679 15:06:34 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3317741 is not found' 00:21:15.679 Process with pid 3317741 is not found 00:21:15.679 15:06:34 -- target/tls.sh@17 -- # nvmftestfini 00:21:15.679 15:06:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:15.679 15:06:34 -- nvmf/common.sh@116 -- # sync 00:21:15.679 15:06:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:15.679 15:06:34 -- nvmf/common.sh@119 -- # set +e 00:21:15.679 15:06:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:15.679 15:06:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:15.679 rmmod nvme_tcp 00:21:15.938 rmmod nvme_fabrics 00:21:15.938 rmmod nvme_keyring 00:21:15.938 15:06:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:15.938 15:06:34 -- nvmf/common.sh@123 -- # set -e 00:21:15.938 15:06:34 -- nvmf/common.sh@124 -- # return 0 00:21:15.938 15:06:34 -- nvmf/common.sh@477 -- # '[' -n 3317501 ']' 00:21:15.938 15:06:34 -- nvmf/common.sh@478 -- # killprocess 3317501 00:21:15.938 15:06:34 -- common/autotest_common.sh@926 -- # '[' -z 3317501 ']' 00:21:15.938 15:06:34 -- common/autotest_common.sh@930 -- # kill -0 3317501 00:21:15.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3317501) - No such process 00:21:15.938 15:06:34 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3317501 is not found' 00:21:15.938 Process with pid 3317501 is not found 00:21:15.938 15:06:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:15.938 15:06:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:15.938 15:06:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:15.938 15:06:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:15.938 15:06:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:15.938 15:06:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.938 15:06:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.938 15:06:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.841 15:06:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:17.841 15:06:36 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:17.841 00:21:17.841 real 1m18.699s 00:21:17.841 user 2m0.854s 00:21:17.841 sys 0m26.975s 00:21:17.841 15:06:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:17.841 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:21:17.841 ************************************ 00:21:17.841 END TEST nvmf_tls 00:21:17.841 ************************************ 
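The cleanup above archives the target's trace shared-memory file before tearing the environment down. A hedged sketch of doing the same by hand follows; the output directory is a placeholder for the harness's ../output path.

# The target was started with "-e 0xFFFF", so its tracepoints are recorded in /dev/shm/nvmf_trace.0.
shm_file=/dev/shm/nvmf_trace.0
out_dir=./output                                   # placeholder for the harness output directory
mkdir -p "$out_dir"
if [ -f "$shm_file" ]; then
    tar -C /dev/shm/ -czf "$out_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
fi
# Live alternative while the target is still running, as the startup notice above suggests:
#   spdk_trace -s nvmf -i 0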
00:21:17.841 15:06:36 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:17.841 15:06:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:17.841 15:06:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:17.841 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:21:17.841 ************************************ 00:21:17.841 START TEST nvmf_fips 00:21:17.841 ************************************ 00:21:17.841 15:06:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:18.100 * Looking for test storage... 00:21:18.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:18.100 15:06:36 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.100 15:06:36 -- nvmf/common.sh@7 -- # uname -s 00:21:18.100 15:06:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.100 15:06:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.100 15:06:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.100 15:06:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.100 15:06:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.100 15:06:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.100 15:06:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.100 15:06:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.100 15:06:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.100 15:06:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.100 15:06:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:18.100 15:06:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:18.100 15:06:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.100 15:06:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.100 15:06:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:18.100 15:06:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:18.100 15:06:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.100 15:06:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.100 15:06:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.100 15:06:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.100 15:06:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.100 15:06:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.100 15:06:36 -- paths/export.sh@5 -- # export PATH 00:21:18.100 15:06:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.100 15:06:36 -- nvmf/common.sh@46 -- # : 0 00:21:18.100 15:06:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:18.100 15:06:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:18.100 15:06:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:18.100 15:06:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.100 15:06:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.100 15:06:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:18.100 15:06:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:18.100 15:06:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:18.100 15:06:36 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:18.100 15:06:36 -- fips/fips.sh@89 -- # check_openssl_version 00:21:18.100 15:06:36 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:18.100 15:06:36 -- fips/fips.sh@85 -- # openssl version 00:21:18.100 15:06:36 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:18.100 15:06:36 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:18.100 15:06:36 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:18.100 15:06:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:18.100 15:06:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:18.100 15:06:36 -- scripts/common.sh@335 -- # IFS=.-: 00:21:18.100 15:06:36 -- scripts/common.sh@335 -- # read -ra ver1 00:21:18.100 15:06:36 -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.100 15:06:36 -- scripts/common.sh@336 -- # read -ra ver2 00:21:18.100 15:06:36 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:18.100 15:06:36 -- scripts/common.sh@339 -- # ver1_l=3 00:21:18.100 15:06:36 -- scripts/common.sh@340 -- # ver2_l=3 00:21:18.100 15:06:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:21:18.100 15:06:36 -- scripts/common.sh@343 -- # case "$op" in 00:21:18.100 15:06:36 -- scripts/common.sh@347 -- # : 1 00:21:18.100 15:06:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:18.100 15:06:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:18.100 15:06:36 -- scripts/common.sh@364 -- # decimal 3 00:21:18.100 15:06:36 -- scripts/common.sh@352 -- # local d=3 00:21:18.100 15:06:36 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:18.100 15:06:36 -- scripts/common.sh@354 -- # echo 3 00:21:18.100 15:06:36 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:18.100 15:06:36 -- scripts/common.sh@365 -- # decimal 3 00:21:18.100 15:06:36 -- scripts/common.sh@352 -- # local d=3 00:21:18.100 15:06:36 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:18.100 15:06:36 -- scripts/common.sh@354 -- # echo 3 00:21:18.101 15:06:36 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:18.101 15:06:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:18.101 15:06:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:18.101 15:06:36 -- scripts/common.sh@363 -- # (( v++ )) 00:21:18.101 15:06:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:18.101 15:06:36 -- scripts/common.sh@364 -- # decimal 0 00:21:18.101 15:06:36 -- scripts/common.sh@352 -- # local d=0 00:21:18.101 15:06:36 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:18.101 15:06:36 -- scripts/common.sh@354 -- # echo 0 00:21:18.101 15:06:36 -- scripts/common.sh@364 -- # ver1[v]=0 00:21:18.101 15:06:36 -- scripts/common.sh@365 -- # decimal 0 00:21:18.101 15:06:36 -- scripts/common.sh@352 -- # local d=0 00:21:18.101 15:06:36 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:18.101 15:06:36 -- scripts/common.sh@354 -- # echo 0 00:21:18.101 15:06:36 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:18.101 15:06:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:18.101 15:06:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:18.101 15:06:36 -- scripts/common.sh@363 -- # (( v++ )) 00:21:18.101 15:06:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:18.101 15:06:36 -- scripts/common.sh@364 -- # decimal 9 00:21:18.101 15:06:36 -- scripts/common.sh@352 -- # local d=9 00:21:18.101 15:06:36 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:18.101 15:06:36 -- scripts/common.sh@354 -- # echo 9 00:21:18.101 15:06:36 -- scripts/common.sh@364 -- # ver1[v]=9 00:21:18.101 15:06:36 -- scripts/common.sh@365 -- # decimal 0 00:21:18.101 15:06:36 -- scripts/common.sh@352 -- # local d=0 00:21:18.101 15:06:36 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:18.101 15:06:36 -- scripts/common.sh@354 -- # echo 0 00:21:18.101 15:06:36 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:18.101 15:06:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:18.101 15:06:36 -- scripts/common.sh@366 -- # return 0 00:21:18.101 15:06:36 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:18.101 15:06:36 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:18.101 15:06:36 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:18.101 15:06:36 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:18.101 15:06:36 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:18.101 15:06:36 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:18.101 15:06:36 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:18.101 15:06:36 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:18.101 15:06:36 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:18.101 15:06:36 -- fips/fips.sh@114 -- # build_openssl_config 00:21:18.101 15:06:36 -- fips/fips.sh@37 -- # cat 00:21:18.101 15:06:36 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:18.101 15:06:36 -- fips/fips.sh@58 -- # cat - 00:21:18.101 15:06:36 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:18.101 15:06:36 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:18.101 15:06:36 -- fips/fips.sh@117 -- # mapfile -t providers 00:21:18.101 15:06:36 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:21:18.101 15:06:36 -- fips/fips.sh@117 -- # openssl list -providers 00:21:18.101 15:06:36 -- fips/fips.sh@117 -- # grep name 00:21:18.101 15:06:36 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:18.101 15:06:36 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:18.101 15:06:36 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:18.101 15:06:36 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:18.101 15:06:36 -- fips/fips.sh@128 -- # : 00:21:18.101 15:06:36 -- common/autotest_common.sh@640 -- # local es=0 00:21:18.101 15:06:36 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:18.101 15:06:36 -- common/autotest_common.sh@628 -- # local arg=openssl 00:21:18.101 15:06:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.101 15:06:36 -- common/autotest_common.sh@632 -- # type -t openssl 00:21:18.101 15:06:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.101 15:06:36 -- common/autotest_common.sh@634 -- # type -P openssl 00:21:18.101 15:06:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.101 15:06:36 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:21:18.101 15:06:36 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:21:18.101 15:06:36 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:21:18.101 Error setting digest 00:21:18.101 00622DE3EC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:18.101 00622DE3EC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:18.101 15:06:36 -- common/autotest_common.sh@643 -- # es=1 00:21:18.101 15:06:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:18.101 15:06:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:18.101 15:06:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
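The fips.sh prologue above boils down to three preconditions: OpenSSL is at least 3.0.0, both a base and a fips provider are loaded, and MD5 is refused. A hedged, standalone version of those checks is sketched below; the real script additionally points OPENSSL_CONF at a generated spdk_fips.conf before running them.

# Sketch of the FIPS preconditions checked above (not the script's exact code path).
openssl version                               # the test requires >= 3.0.0
openssl list -providers | grep name           # expect both a "base" and a "fips" provider
if openssl md5 /dev/null >/dev/null 2>&1; then
    echo "MD5 digest still works, so FIPS mode is not enforced" >&2
else
    echo "MD5 rejected as expected under FIPS"
fi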
00:21:18.101 15:06:36 -- fips/fips.sh@131 -- # nvmftestinit 00:21:18.101 15:06:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:18.101 15:06:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.101 15:06:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:18.101 15:06:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:18.101 15:06:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:18.101 15:06:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.101 15:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.101 15:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.360 15:06:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:18.360 15:06:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:18.360 15:06:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:18.360 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:21:24.924 15:06:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:24.924 15:06:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:24.924 15:06:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:24.924 15:06:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:24.924 15:06:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:24.924 15:06:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:24.924 15:06:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:24.924 15:06:43 -- nvmf/common.sh@294 -- # net_devs=() 00:21:24.924 15:06:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:24.924 15:06:43 -- nvmf/common.sh@295 -- # e810=() 00:21:24.924 15:06:43 -- nvmf/common.sh@295 -- # local -ga e810 00:21:24.924 15:06:43 -- nvmf/common.sh@296 -- # x722=() 00:21:24.924 15:06:43 -- nvmf/common.sh@296 -- # local -ga x722 00:21:24.924 15:06:43 -- nvmf/common.sh@297 -- # mlx=() 00:21:24.924 15:06:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:24.924 15:06:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.924 15:06:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:24.924 15:06:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:24.924 15:06:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:24.924 15:06:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:24.924 15:06:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:24.924 15:06:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:24.924 15:06:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:24.925 15:06:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:24.925 Found 0000:af:00.0 
(0x8086 - 0x159b) 00:21:24.925 15:06:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:24.925 15:06:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:24.925 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:24.925 15:06:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:24.925 15:06:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:24.925 15:06:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.925 15:06:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:24.925 15:06:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.925 15:06:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:24.925 Found net devices under 0000:af:00.0: cvl_0_0 00:21:24.925 15:06:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.925 15:06:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:24.925 15:06:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.925 15:06:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:24.925 15:06:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.925 15:06:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:24.925 Found net devices under 0000:af:00.1: cvl_0_1 00:21:24.925 15:06:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.925 15:06:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:24.925 15:06:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:24.925 15:06:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:24.925 15:06:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.925 15:06:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.925 15:06:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.925 15:06:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:24.925 15:06:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.925 15:06:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.925 15:06:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:24.925 15:06:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.925 15:06:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.925 15:06:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:24.925 15:06:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:24.925 15:06:43 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:21:24.925 15:06:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.925 15:06:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.925 15:06:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.925 15:06:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:24.925 15:06:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.925 15:06:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.925 15:06:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.925 15:06:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:24.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:21:24.925 00:21:24.925 --- 10.0.0.2 ping statistics --- 00:21:24.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.925 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:24.925 15:06:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:24.925 00:21:24.925 --- 10.0.0.1 ping statistics --- 00:21:24.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.925 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:24.925 15:06:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.925 15:06:43 -- nvmf/common.sh@410 -- # return 0 00:21:24.925 15:06:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:24.925 15:06:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.925 15:06:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:24.925 15:06:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.925 15:06:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:24.925 15:06:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:24.925 15:06:43 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:24.925 15:06:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:24.925 15:06:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:24.925 15:06:43 -- common/autotest_common.sh@10 -- # set +x 00:21:24.925 15:06:43 -- nvmf/common.sh@469 -- # nvmfpid=3323783 00:21:24.925 15:06:43 -- nvmf/common.sh@470 -- # waitforlisten 3323783 00:21:24.925 15:06:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:24.925 15:06:43 -- common/autotest_common.sh@819 -- # '[' -z 3323783 ']' 00:21:24.925 15:06:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.925 15:06:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:24.925 15:06:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.925 15:06:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:24.925 15:06:43 -- common/autotest_common.sh@10 -- # set +x 00:21:24.925 [2024-06-11 15:06:43.441134] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:21:24.925 [2024-06-11 15:06:43.441191] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.925 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.925 [2024-06-11 15:06:43.529107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.925 [2024-06-11 15:06:43.618630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:24.925 [2024-06-11 15:06:43.618764] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.925 [2024-06-11 15:06:43.618776] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.925 [2024-06-11 15:06:43.618784] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.925 [2024-06-11 15:06:43.618813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.492 15:06:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:25.492 15:06:44 -- common/autotest_common.sh@852 -- # return 0 00:21:25.492 15:06:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:25.492 15:06:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:25.492 15:06:44 -- common/autotest_common.sh@10 -- # set +x 00:21:25.492 15:06:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.492 15:06:44 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:25.492 15:06:44 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:25.492 15:06:44 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:25.492 15:06:44 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:25.492 15:06:44 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:25.492 15:06:44 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:25.492 15:06:44 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:25.492 15:06:44 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:25.751 [2024-06-11 15:06:44.504845] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.751 [2024-06-11 15:06:44.520853] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.751 [2024-06-11 15:06:44.521046] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.751 malloc0 00:21:25.751 15:06:44 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.751 15:06:44 -- fips/fips.sh@148 -- # bdevperf_pid=3324002 00:21:25.751 15:06:44 -- fips/fips.sh@149 -- # waitforlisten 3324002 /var/tmp/bdevperf.sock 00:21:25.751 15:06:44 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:25.751 15:06:44 -- common/autotest_common.sh@819 -- # '[' -z 3324002 ']' 00:21:25.751 15:06:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.751 15:06:44 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:25.751 15:06:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.751 15:06:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:25.751 15:06:44 -- common/autotest_common.sh@10 -- # set +x 00:21:26.010 [2024-06-11 15:06:44.649954] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:26.010 [2024-06-11 15:06:44.650017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324002 ] 00:21:26.010 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.010 [2024-06-11 15:06:44.728338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.010 [2024-06-11 15:06:44.797292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.945 15:06:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:26.945 15:06:45 -- common/autotest_common.sh@852 -- # return 0 00:21:26.945 15:06:45 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:26.945 [2024-06-11 15:06:45.703187] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.945 TLSTESTn1 00:21:27.204 15:06:45 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.204 Running I/O for 10 seconds... 
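For reference, the PSK shared by both sides of this FIPS run is the NVMe/TCP interchange-format key echoed above, written to a mode-0600 file. A hedged sketch of that setup (the file location is a placeholder; the test itself uses test/nvmf/fips/key.txt in the SPDK tree):

# Write the interchange-format TLS PSK to a private file; the same path is then handed
# to nvmf_subsystem_add_host on the target and to bdev_nvme_attach_controller --psk on the initiator.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/tmp/key.txt
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"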
00:21:37.179 00:21:37.179 Latency(us) 00:21:37.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.179 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:37.179 Verification LBA range: start 0x0 length 0x2000 00:21:37.179 TLSTESTn1 : 10.04 2597.13 10.15 0.00 0.00 49202.04 7119.59 62437.93 00:21:37.179 =================================================================================================================== 00:21:37.179 Total : 2597.13 10.15 0.00 0.00 49202.04 7119.59 62437.93 00:21:37.179 0 00:21:37.179 15:06:55 -- fips/fips.sh@1 -- # cleanup 00:21:37.179 15:06:55 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:37.179 15:06:56 -- common/autotest_common.sh@796 -- # type=--id 00:21:37.179 15:06:56 -- common/autotest_common.sh@797 -- # id=0 00:21:37.179 15:06:56 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:37.179 15:06:56 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:37.179 15:06:56 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:37.179 15:06:56 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:37.179 15:06:56 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:37.179 15:06:56 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:37.179 nvmf_trace.0 00:21:37.438 15:06:56 -- common/autotest_common.sh@811 -- # return 0 00:21:37.438 15:06:56 -- fips/fips.sh@16 -- # killprocess 3324002 00:21:37.438 15:06:56 -- common/autotest_common.sh@926 -- # '[' -z 3324002 ']' 00:21:37.438 15:06:56 -- common/autotest_common.sh@930 -- # kill -0 3324002 00:21:37.438 15:06:56 -- common/autotest_common.sh@931 -- # uname 00:21:37.438 15:06:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.438 15:06:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3324002 00:21:37.438 15:06:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:37.438 15:06:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:37.438 15:06:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3324002' 00:21:37.438 killing process with pid 3324002 00:21:37.438 15:06:56 -- common/autotest_common.sh@945 -- # kill 3324002 00:21:37.438 Received shutdown signal, test time was about 10.000000 seconds 00:21:37.438 00:21:37.438 Latency(us) 00:21:37.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.438 =================================================================================================================== 00:21:37.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.438 15:06:56 -- common/autotest_common.sh@950 -- # wait 3324002 00:21:37.698 15:06:56 -- fips/fips.sh@17 -- # nvmftestfini 00:21:37.698 15:06:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:37.698 15:06:56 -- nvmf/common.sh@116 -- # sync 00:21:37.698 15:06:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:37.698 15:06:56 -- nvmf/common.sh@119 -- # set +e 00:21:37.698 15:06:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:37.698 15:06:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:37.698 rmmod nvme_tcp 00:21:37.698 rmmod nvme_fabrics 00:21:37.698 rmmod nvme_keyring 00:21:37.698 15:06:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:37.698 15:06:56 -- nvmf/common.sh@123 -- # set -e 00:21:37.698 15:06:56 -- nvmf/common.sh@124 -- # return 0 
00:21:37.698 15:06:56 -- nvmf/common.sh@477 -- # '[' -n 3323783 ']' 00:21:37.698 15:06:56 -- nvmf/common.sh@478 -- # killprocess 3323783 00:21:37.698 15:06:56 -- common/autotest_common.sh@926 -- # '[' -z 3323783 ']' 00:21:37.698 15:06:56 -- common/autotest_common.sh@930 -- # kill -0 3323783 00:21:37.698 15:06:56 -- common/autotest_common.sh@931 -- # uname 00:21:37.698 15:06:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.698 15:06:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3323783 00:21:37.698 15:06:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:37.698 15:06:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:37.698 15:06:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3323783' 00:21:37.698 killing process with pid 3323783 00:21:37.698 15:06:56 -- common/autotest_common.sh@945 -- # kill 3323783 00:21:37.698 15:06:56 -- common/autotest_common.sh@950 -- # wait 3323783 00:21:37.958 15:06:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:37.958 15:06:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:37.958 15:06:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:37.958 15:06:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.958 15:06:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:37.958 15:06:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.958 15:06:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.958 15:06:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.496 15:06:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:40.496 15:06:58 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:40.496 00:21:40.496 real 0m22.076s 00:21:40.496 user 0m23.428s 00:21:40.496 sys 0m9.869s 00:21:40.496 15:06:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.496 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:21:40.496 ************************************ 00:21:40.496 END TEST nvmf_fips 00:21:40.496 ************************************ 00:21:40.496 15:06:58 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:40.496 15:06:58 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:40.496 15:06:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:40.496 15:06:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:40.496 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:21:40.496 ************************************ 00:21:40.496 START TEST nvmf_fuzz 00:21:40.496 ************************************ 00:21:40.496 15:06:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:40.496 * Looking for test storage... 
00:21:40.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:40.496 15:06:58 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.496 15:06:58 -- nvmf/common.sh@7 -- # uname -s 00:21:40.496 15:06:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.496 15:06:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.496 15:06:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.496 15:06:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.496 15:06:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.496 15:06:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.496 15:06:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.496 15:06:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.496 15:06:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.496 15:06:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.496 15:06:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:40.496 15:06:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:40.496 15:06:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.496 15:06:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.496 15:06:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.496 15:06:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.496 15:06:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.496 15:06:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.496 15:06:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.496 15:06:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.496 15:06:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.496 15:06:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.496 15:06:58 -- paths/export.sh@5 -- # export PATH 00:21:40.496 15:06:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.496 15:06:58 -- nvmf/common.sh@46 -- # : 0 00:21:40.496 15:06:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:40.496 15:06:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:40.496 15:06:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:40.496 15:06:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.496 15:06:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.496 15:06:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:40.496 15:06:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:40.496 15:06:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:40.496 15:06:58 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:40.496 15:06:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:40.496 15:06:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.496 15:06:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:40.496 15:06:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:40.496 15:06:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:40.496 15:06:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.496 15:06:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.496 15:06:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.496 15:06:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:40.496 15:06:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:40.496 15:06:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:40.496 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:21:47.062 15:07:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:47.062 15:07:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:47.062 15:07:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:47.062 15:07:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:47.062 15:07:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:47.062 15:07:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:47.062 15:07:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:47.062 15:07:04 -- nvmf/common.sh@294 -- # net_devs=() 00:21:47.062 15:07:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:47.062 15:07:04 -- nvmf/common.sh@295 -- # e810=() 00:21:47.062 15:07:04 -- nvmf/common.sh@295 -- # local -ga e810 00:21:47.062 15:07:04 -- nvmf/common.sh@296 -- # x722=() 
00:21:47.062 15:07:04 -- nvmf/common.sh@296 -- # local -ga x722 00:21:47.062 15:07:04 -- nvmf/common.sh@297 -- # mlx=() 00:21:47.062 15:07:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:47.062 15:07:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.062 15:07:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:47.062 15:07:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:47.062 15:07:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:47.062 15:07:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.062 15:07:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:47.062 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:47.062 15:07:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.062 15:07:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:47.062 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:47.062 15:07:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:47.062 15:07:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.062 15:07:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.062 15:07:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.062 15:07:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.062 15:07:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:47.062 Found net devices under 0000:af:00.0: cvl_0_0 00:21:47.062 15:07:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:47.062 15:07:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.062 15:07:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.062 15:07:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.062 15:07:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.062 15:07:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:47.062 Found net devices under 0000:af:00.1: cvl_0_1 00:21:47.062 15:07:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.062 15:07:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:47.062 15:07:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:47.062 15:07:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:47.062 15:07:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.062 15:07:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.062 15:07:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.062 15:07:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:47.062 15:07:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.062 15:07:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.062 15:07:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:47.062 15:07:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.062 15:07:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.062 15:07:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:47.062 15:07:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:47.062 15:07:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.062 15:07:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.062 15:07:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.062 15:07:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.062 15:07:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:47.062 15:07:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.062 15:07:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.062 15:07:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.062 15:07:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:47.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:21:47.062 00:21:47.062 --- 10.0.0.2 ping statistics --- 00:21:47.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.062 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:21:47.062 15:07:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:21:47.062 00:21:47.062 --- 10.0.0.1 ping statistics --- 00:21:47.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.062 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:21:47.062 15:07:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.062 15:07:04 -- nvmf/common.sh@410 -- # return 0 00:21:47.062 15:07:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:47.062 15:07:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.062 15:07:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:47.062 15:07:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.062 15:07:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:47.062 15:07:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:47.062 15:07:04 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3330154 00:21:47.062 15:07:04 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:47.062 15:07:04 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3330154 00:21:47.062 15:07:04 -- common/autotest_common.sh@819 -- # '[' -z 3330154 ']' 00:21:47.062 15:07:04 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:47.062 15:07:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.062 15:07:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:47.062 15:07:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
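The plumbing traced just above (nvmf_tcp_init, ending with the two ping checks and the 'modprobe nvme-tcp') turns the pair of E810 ports into a self-contained NVMe/TCP test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A condensed bash sketch of that wiring, lifted from the trace (the interface names are the ones this rig exposes; the canonical steps are nvmf_tcp_init in test/nvmf/common.sh):

# Requires root and the two in-kernel-driver ports named below.
target_if=cvl_0_0; initiator_if=cvl_0_1
target_ip=10.0.0.2; initiator_ip=10.0.0.1
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"                        # target port lives inside the namespace

ip addr add "${initiator_ip}/24" dev "$initiator_if"
ip netns exec "$ns" ip addr add "${target_ip}/24" dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in

ping -c 1 "$target_ip"                                      # root namespace -> target side
ip netns exec "$ns" ping -c 1 "$initiator_ip"               # target namespace -> initiator side

The nvmf_tgt started next for the fuzz test is then launched under 'ip netns exec cvl_0_0_ns_spdk', so it listens on 10.0.0.2 while the fuzzer connects from the root namespace.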
00:21:47.062 15:07:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.062 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:21:47.062 15:07:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.062 15:07:05 -- common/autotest_common.sh@852 -- # return 0 00:21:47.062 15:07:05 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.062 15:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.062 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.062 15:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.062 15:07:05 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:47.062 15:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.062 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.062 Malloc0 00:21:47.063 15:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.063 15:07:05 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.063 15:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.063 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.063 15:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.063 15:07:05 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:47.063 15:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.063 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.063 15:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.063 15:07:05 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.063 15:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.063 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.063 15:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.063 15:07:05 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:47.063 15:07:05 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:19.221 Fuzzing completed. Shutting down the fuzz application 00:22:19.221 00:22:19.221 Dumping successful admin opcodes: 00:22:19.221 8, 9, 10, 24, 00:22:19.221 Dumping successful io opcodes: 00:22:19.221 0, 9, 00:22:19.221 NS: 0x200003aeff00 I/O qp, Total commands completed: 634674, total successful commands: 3691, random_seed: 54237888 00:22:19.221 NS: 0x200003aeff00 admin qp, Total commands completed: 69048, total successful commands: 544, random_seed: 4170867968 00:22:19.221 15:07:36 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:19.221 Fuzzing completed. 
Shutting down the fuzz application 00:22:19.221 00:22:19.221 Dumping successful admin opcodes: 00:22:19.221 24, 00:22:19.222 Dumping successful io opcodes: 00:22:19.222 00:22:19.222 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 665652700 00:22:19.222 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 665770684 00:22:19.222 15:07:37 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:19.222 15:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.222 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:22:19.222 15:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.222 15:07:37 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:19.222 15:07:37 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:19.222 15:07:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:19.222 15:07:37 -- nvmf/common.sh@116 -- # sync 00:22:19.222 15:07:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:19.222 15:07:37 -- nvmf/common.sh@119 -- # set +e 00:22:19.222 15:07:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:19.222 15:07:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:19.222 rmmod nvme_tcp 00:22:19.222 rmmod nvme_fabrics 00:22:19.222 rmmod nvme_keyring 00:22:19.222 15:07:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:19.222 15:07:37 -- nvmf/common.sh@123 -- # set -e 00:22:19.222 15:07:37 -- nvmf/common.sh@124 -- # return 0 00:22:19.222 15:07:37 -- nvmf/common.sh@477 -- # '[' -n 3330154 ']' 00:22:19.222 15:07:37 -- nvmf/common.sh@478 -- # killprocess 3330154 00:22:19.222 15:07:37 -- common/autotest_common.sh@926 -- # '[' -z 3330154 ']' 00:22:19.222 15:07:37 -- common/autotest_common.sh@930 -- # kill -0 3330154 00:22:19.222 15:07:37 -- common/autotest_common.sh@931 -- # uname 00:22:19.222 15:07:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:19.222 15:07:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3330154 00:22:19.222 15:07:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:19.222 15:07:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:19.222 15:07:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3330154' 00:22:19.222 killing process with pid 3330154 00:22:19.222 15:07:37 -- common/autotest_common.sh@945 -- # kill 3330154 00:22:19.222 15:07:37 -- common/autotest_common.sh@950 -- # wait 3330154 00:22:19.222 15:07:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:19.222 15:07:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:19.222 15:07:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:19.222 15:07:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.222 15:07:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:19.222 15:07:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.222 15:07:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.222 15:07:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.756 15:07:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:21.756 15:07:40 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:21.756 00:22:21.756 real 0m41.328s 00:22:21.756 user 0m54.685s 00:22:21.756 sys 
0m16.309s 00:22:21.756 15:07:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.756 15:07:40 -- common/autotest_common.sh@10 -- # set +x 00:22:21.756 ************************************ 00:22:21.756 END TEST nvmf_fuzz 00:22:21.756 ************************************ 00:22:21.756 15:07:40 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:21.756 15:07:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:21.756 15:07:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:21.757 15:07:40 -- common/autotest_common.sh@10 -- # set +x 00:22:21.757 ************************************ 00:22:21.757 START TEST nvmf_multiconnection 00:22:21.757 ************************************ 00:22:21.757 15:07:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:21.757 * Looking for test storage... 00:22:21.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:21.757 15:07:40 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.757 15:07:40 -- nvmf/common.sh@7 -- # uname -s 00:22:21.757 15:07:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.757 15:07:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.757 15:07:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.757 15:07:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.757 15:07:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.757 15:07:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.757 15:07:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.757 15:07:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.757 15:07:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.757 15:07:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.757 15:07:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:21.757 15:07:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:21.757 15:07:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.757 15:07:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.757 15:07:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.757 15:07:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.757 15:07:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.757 15:07:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.757 15:07:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.757 15:07:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.757 15:07:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.757 15:07:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.757 15:07:40 -- paths/export.sh@5 -- # export PATH 00:22:21.757 15:07:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.757 15:07:40 -- nvmf/common.sh@46 -- # : 0 00:22:21.757 15:07:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:21.757 15:07:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:21.757 15:07:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:21.757 15:07:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.757 15:07:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.757 15:07:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:21.757 15:07:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:21.757 15:07:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:21.757 15:07:40 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:21.757 15:07:40 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:21.757 15:07:40 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:21.757 15:07:40 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:21.757 15:07:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:21.757 15:07:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.757 15:07:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:21.757 15:07:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:21.757 15:07:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:21.757 15:07:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.757 15:07:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.757 15:07:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.757 15:07:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:21.757 15:07:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:21.757 15:07:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:21.757 15:07:40 -- common/autotest_common.sh@10 -- 
# set +x 00:22:28.329 15:07:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:28.329 15:07:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:28.329 15:07:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:28.329 15:07:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:28.329 15:07:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:28.329 15:07:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:28.329 15:07:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:28.329 15:07:46 -- nvmf/common.sh@294 -- # net_devs=() 00:22:28.329 15:07:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:28.329 15:07:46 -- nvmf/common.sh@295 -- # e810=() 00:22:28.329 15:07:46 -- nvmf/common.sh@295 -- # local -ga e810 00:22:28.329 15:07:46 -- nvmf/common.sh@296 -- # x722=() 00:22:28.329 15:07:46 -- nvmf/common.sh@296 -- # local -ga x722 00:22:28.329 15:07:46 -- nvmf/common.sh@297 -- # mlx=() 00:22:28.329 15:07:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:28.329 15:07:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.329 15:07:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:28.329 15:07:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:28.329 15:07:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:28.329 15:07:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:28.329 15:07:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:28.329 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:28.329 15:07:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:28.329 15:07:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:28.329 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:28.329 15:07:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.329 15:07:46 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:28.329 15:07:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:28.329 15:07:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.329 15:07:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:28.329 15:07:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.329 15:07:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:28.329 Found net devices under 0000:af:00.0: cvl_0_0 00:22:28.329 15:07:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.329 15:07:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:28.329 15:07:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.329 15:07:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:28.329 15:07:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.329 15:07:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:28.329 Found net devices under 0000:af:00.1: cvl_0_1 00:22:28.329 15:07:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.329 15:07:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:28.329 15:07:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:28.329 15:07:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:28.329 15:07:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.329 15:07:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.329 15:07:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.329 15:07:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:28.329 15:07:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.329 15:07:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.329 15:07:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:28.329 15:07:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.329 15:07:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.329 15:07:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:28.329 15:07:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:28.329 15:07:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.329 15:07:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.329 15:07:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.329 15:07:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.329 15:07:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:28.329 15:07:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.329 15:07:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.329 15:07:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.329 15:07:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:28.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:28.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:22:28.329 00:22:28.329 --- 10.0.0.2 ping statistics --- 00:22:28.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.329 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:22:28.329 15:07:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:28.329 00:22:28.329 --- 10.0.0.1 ping statistics --- 00:22:28.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.329 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:28.329 15:07:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.329 15:07:46 -- nvmf/common.sh@410 -- # return 0 00:22:28.329 15:07:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:28.329 15:07:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.329 15:07:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:28.329 15:07:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.329 15:07:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:28.329 15:07:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:28.329 15:07:46 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:28.329 15:07:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:28.329 15:07:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:28.329 15:07:46 -- common/autotest_common.sh@10 -- # set +x 00:22:28.329 15:07:46 -- nvmf/common.sh@469 -- # nvmfpid=3340034 00:22:28.329 15:07:46 -- nvmf/common.sh@470 -- # waitforlisten 3340034 00:22:28.329 15:07:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.329 15:07:46 -- common/autotest_common.sh@819 -- # '[' -z 3340034 ']' 00:22:28.329 15:07:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.329 15:07:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.329 15:07:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.329 15:07:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.329 15:07:46 -- common/autotest_common.sh@10 -- # set +x 00:22:28.329 [2024-06-11 15:07:46.878874] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:28.329 [2024-06-11 15:07:46.878930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.329 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.329 [2024-06-11 15:07:46.976818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.329 [2024-06-11 15:07:47.064401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:28.329 [2024-06-11 15:07:47.064549] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:28.330 [2024-06-11 15:07:47.064560] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.330 [2024-06-11 15:07:47.064570] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.330 [2024-06-11 15:07:47.064674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.330 [2024-06-11 15:07:47.064788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.330 [2024-06-11 15:07:47.064907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.330 [2024-06-11 15:07:47.064907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.898 15:07:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:28.898 15:07:47 -- common/autotest_common.sh@852 -- # return 0 00:22:28.898 15:07:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:28.898 15:07:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:28.898 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:28.898 15:07:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.898 15:07:47 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.898 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.898 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:28.898 [2024-06-11 15:07:47.682113] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.898 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.898 15:07:47 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:28.898 15:07:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.898 15:07:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:28.898 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.898 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:28.898 Malloc1 00:22:28.898 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.898 15:07:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:28.898 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.898 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:28.898 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.898 15:07:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:28.898 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.899 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:28.899 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.899 15:07:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.899 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.899 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 [2024-06-11 15:07:47.742091] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.158 15:07:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:29.158 15:07:47 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 Malloc2 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.158 15:07:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 Malloc3 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.158 15:07:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 Malloc4 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:29.158 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.158 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.158 15:07:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.158 15:07:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 Malloc5 00:22:29.159 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.159 15:07:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.159 15:07:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.159 15:07:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.159 15:07:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.159 15:07:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 Malloc6 00:22:29.159 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.159 15:07:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.159 15:07:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.159 15:07:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 15:07:47 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.159 15:07:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.159 15:07:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 Malloc7 00:22:29.159 15:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.159 15:07:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:29.159 15:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.159 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:22:29.418 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.418 15:07:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:29.418 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.418 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.418 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.418 15:07:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:29.418 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.418 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.418 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.418 15:07:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.418 15:07:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:29.418 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.418 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.418 Malloc8 00:22:29.418 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.418 15:07:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:29.418 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.418 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.418 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.418 15:07:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:29.418 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.418 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.418 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.418 15:07:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:29.418 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.418 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.418 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.418 15:07:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.418 15:07:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:29.418 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.418 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.418 Malloc9 00:22:29.418 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.418 15:07:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
00:22:29.418 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.418 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.418 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.418 15:07:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:29.418 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.418 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:29.419 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.419 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.419 15:07:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:29.419 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.419 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 Malloc10 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:29.419 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.419 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:29.419 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.419 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:29.419 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.419 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.419 15:07:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:29.419 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.419 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 Malloc11 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:29.419 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.419 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:29.419 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.419 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:29.419 15:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.419 15:07:48 -- common/autotest_common.sh@10 -- # set +x 00:22:29.419 15:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.419 15:07:48 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:29.419 15:07:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.419 15:07:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:30.797 15:07:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:30.797 15:07:49 -- common/autotest_common.sh@1177 -- # local i=0 00:22:30.797 15:07:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:30.797 15:07:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:30.797 15:07:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:32.700 15:07:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:32.700 15:07:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:32.700 15:07:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:32.700 15:07:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:32.700 15:07:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:32.700 15:07:51 -- common/autotest_common.sh@1187 -- # return 0 00:22:32.700 15:07:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.700 15:07:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:34.604 15:07:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:34.604 15:07:52 -- common/autotest_common.sh@1177 -- # local i=0 00:22:34.604 15:07:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:34.604 15:07:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:34.604 15:07:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:36.507 15:07:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:36.507 15:07:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:36.507 15:07:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:36.507 15:07:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:36.507 15:07:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:36.507 15:07:54 -- common/autotest_common.sh@1187 -- # return 0 00:22:36.507 15:07:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.507 15:07:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:37.443 15:07:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:37.443 15:07:56 -- common/autotest_common.sh@1177 -- # local i=0 00:22:37.443 15:07:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:37.443 15:07:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:37.443 15:07:56 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:22:39.977 15:07:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:39.977 15:07:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:39.977 15:07:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:39.977 15:07:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:39.977 15:07:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:39.977 15:07:58 -- common/autotest_common.sh@1187 -- # return 0 00:22:39.977 15:07:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.977 15:07:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:40.913 15:07:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:40.913 15:07:59 -- common/autotest_common.sh@1177 -- # local i=0 00:22:40.913 15:07:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:40.913 15:07:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:40.913 15:07:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:43.450 15:08:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:43.450 15:08:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:43.450 15:08:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:43.450 15:08:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:43.450 15:08:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:43.450 15:08:01 -- common/autotest_common.sh@1187 -- # return 0 00:22:43.450 15:08:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.450 15:08:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:44.388 15:08:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:44.388 15:08:03 -- common/autotest_common.sh@1177 -- # local i=0 00:22:44.388 15:08:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:44.388 15:08:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:44.388 15:08:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:46.292 15:08:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:46.292 15:08:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:46.292 15:08:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:46.292 15:08:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:46.292 15:08:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:46.292 15:08:05 -- common/autotest_common.sh@1187 -- # return 0 00:22:46.292 15:08:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:46.292 15:08:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:48.197 15:08:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:48.197 15:08:06 -- common/autotest_common.sh@1177 -- # local i=0 00:22:48.197 15:08:06 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:48.197 15:08:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:48.197 15:08:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:50.103 15:08:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:50.103 15:08:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:50.103 15:08:08 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:50.103 15:08:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:50.103 15:08:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:50.103 15:08:08 -- common/autotest_common.sh@1187 -- # return 0 00:22:50.103 15:08:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:50.103 15:08:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:51.478 15:08:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:51.478 15:08:10 -- common/autotest_common.sh@1177 -- # local i=0 00:22:51.478 15:08:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:51.478 15:08:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:51.478 15:08:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:53.444 15:08:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:53.445 15:08:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:53.445 15:08:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:53.445 15:08:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:53.445 15:08:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:53.445 15:08:12 -- common/autotest_common.sh@1187 -- # return 0 00:22:53.445 15:08:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.445 15:08:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:54.825 15:08:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:54.825 15:08:13 -- common/autotest_common.sh@1177 -- # local i=0 00:22:54.825 15:08:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:54.825 15:08:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:54.825 15:08:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:57.361 15:08:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:57.361 15:08:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:57.361 15:08:15 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:57.361 15:08:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:57.361 15:08:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:57.361 15:08:15 -- common/autotest_common.sh@1187 -- # return 0 00:22:57.361 15:08:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:57.361 15:08:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:58.737 15:08:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:58.737 
15:08:17 -- common/autotest_common.sh@1177 -- # local i=0 00:22:58.737 15:08:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:58.737 15:08:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:58.737 15:08:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:00.641 15:08:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:00.641 15:08:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:00.641 15:08:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:23:00.641 15:08:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:00.641 15:08:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:00.641 15:08:19 -- common/autotest_common.sh@1187 -- # return 0 00:23:00.641 15:08:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.642 15:08:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:02.021 15:08:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:02.021 15:08:20 -- common/autotest_common.sh@1177 -- # local i=0 00:23:02.021 15:08:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:02.021 15:08:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:02.021 15:08:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:03.931 15:08:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:03.931 15:08:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:03.931 15:08:22 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:23:04.189 15:08:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:04.189 15:08:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:04.189 15:08:22 -- common/autotest_common.sh@1187 -- # return 0 00:23:04.189 15:08:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:04.189 15:08:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:06.091 15:08:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:06.091 15:08:24 -- common/autotest_common.sh@1177 -- # local i=0 00:23:06.091 15:08:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:06.091 15:08:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:06.091 15:08:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:08.025 15:08:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:08.025 15:08:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:08.025 15:08:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:23:08.025 15:08:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:08.025 15:08:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:08.025 15:08:26 -- common/autotest_common.sh@1187 -- # return 0 00:23:08.025 15:08:26 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:08.025 [global] 00:23:08.025 thread=1 00:23:08.025 invalidate=1 00:23:08.025 rw=read 00:23:08.025 time_based=1 00:23:08.025 
runtime=10 00:23:08.025 ioengine=libaio 00:23:08.025 direct=1 00:23:08.025 bs=262144 00:23:08.025 iodepth=64 00:23:08.025 norandommap=1 00:23:08.025 numjobs=1 00:23:08.025 00:23:08.025 [job0] 00:23:08.025 filename=/dev/nvme0n1 00:23:08.025 [job1] 00:23:08.025 filename=/dev/nvme10n1 00:23:08.025 [job2] 00:23:08.025 filename=/dev/nvme1n1 00:23:08.025 [job3] 00:23:08.025 filename=/dev/nvme2n1 00:23:08.025 [job4] 00:23:08.025 filename=/dev/nvme3n1 00:23:08.025 [job5] 00:23:08.025 filename=/dev/nvme4n1 00:23:08.025 [job6] 00:23:08.025 filename=/dev/nvme5n1 00:23:08.025 [job7] 00:23:08.025 filename=/dev/nvme6n1 00:23:08.025 [job8] 00:23:08.025 filename=/dev/nvme7n1 00:23:08.025 [job9] 00:23:08.025 filename=/dev/nvme8n1 00:23:08.025 [job10] 00:23:08.025 filename=/dev/nvme9n1 00:23:08.025 Could not set queue depth (nvme0n1) 00:23:08.025 Could not set queue depth (nvme10n1) 00:23:08.025 Could not set queue depth (nvme1n1) 00:23:08.025 Could not set queue depth (nvme2n1) 00:23:08.025 Could not set queue depth (nvme3n1) 00:23:08.025 Could not set queue depth (nvme4n1) 00:23:08.025 Could not set queue depth (nvme5n1) 00:23:08.025 Could not set queue depth (nvme6n1) 00:23:08.025 Could not set queue depth (nvme7n1) 00:23:08.025 Could not set queue depth (nvme8n1) 00:23:08.025 Could not set queue depth (nvme9n1) 00:23:08.288 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:08.288 fio-3.35 00:23:08.288 Starting 11 threads 00:23:20.485 00:23:20.485 job0: (groupid=0, jobs=1): err= 0: pid=3347673: Tue Jun 11 15:08:37 2024 00:23:20.485 read: IOPS=627, BW=157MiB/s (165MB/s)(1578MiB/10051msec) 00:23:20.486 slat (usec): min=10, max=109086, avg=1394.61, stdev=4787.60 00:23:20.486 clat (msec): min=5, max=270, avg=100.40, stdev=35.73 00:23:20.486 lat (msec): min=6, max=270, avg=101.80, stdev=36.35 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 54], 20.00th=[ 69], 00:23:20.486 | 30.00th=[ 86], 40.00th=[ 97], 50.00th=[ 106], 60.00th=[ 113], 00:23:20.486 | 70.00th=[ 122], 80.00th=[ 129], 90.00th=[ 142], 95.00th=[ 153], 00:23:20.486 | 99.00th=[ 174], 99.50th=[ 203], 99.90th=[ 224], 99.95th=[ 253], 00:23:20.486 | 99.99th=[ 271] 00:23:20.486 bw ( KiB/s): min=113664, max=256512, per=8.20%, avg=159924.30, 
stdev=39526.72, samples=20 00:23:20.486 iops : min= 444, max= 1002, avg=624.70, stdev=154.39, samples=20 00:23:20.486 lat (msec) : 10=0.52%, 20=2.61%, 50=5.67%, 100=34.32%, 250=56.81% 00:23:20.486 lat (msec) : 500=0.06% 00:23:20.486 cpu : usr=0.30%, sys=2.43%, ctx=1467, majf=0, minf=3221 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=6311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.486 job1: (groupid=0, jobs=1): err= 0: pid=3347674: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=577, BW=144MiB/s (151MB/s)(1455MiB/10075msec) 00:23:20.486 slat (usec): min=13, max=83556, avg=1104.24, stdev=4035.84 00:23:20.486 clat (msec): min=5, max=228, avg=109.55, stdev=33.70 00:23:20.486 lat (msec): min=5, max=228, avg=110.65, stdev=34.24 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 26], 5.00th=[ 54], 10.00th=[ 68], 20.00th=[ 81], 00:23:20.486 | 30.00th=[ 93], 40.00th=[ 102], 50.00th=[ 111], 60.00th=[ 121], 00:23:20.486 | 70.00th=[ 128], 80.00th=[ 138], 90.00th=[ 153], 95.00th=[ 165], 00:23:20.486 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 203], 99.95th=[ 215], 00:23:20.486 | 99.99th=[ 230] 00:23:20.486 bw ( KiB/s): min=94720, max=205312, per=7.56%, avg=147347.45, stdev=30967.18, samples=20 00:23:20.486 iops : min= 370, max= 802, avg=575.50, stdev=121.01, samples=20 00:23:20.486 lat (msec) : 10=0.02%, 20=0.40%, 50=3.69%, 100=34.26%, 250=61.63% 00:23:20.486 cpu : usr=0.24%, sys=2.12%, ctx=1683, majf=0, minf=4097 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=5820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.486 job2: (groupid=0, jobs=1): err= 0: pid=3347675: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=731, BW=183MiB/s (192MB/s)(1831MiB/10014msec) 00:23:20.486 slat (usec): min=10, max=73141, avg=1198.57, stdev=4033.62 00:23:20.486 clat (msec): min=3, max=226, avg=86.20, stdev=43.80 00:23:20.486 lat (msec): min=3, max=226, avg=87.40, stdev=44.46 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 37], 00:23:20.486 | 30.00th=[ 44], 40.00th=[ 69], 50.00th=[ 90], 60.00th=[ 106], 00:23:20.486 | 70.00th=[ 120], 80.00th=[ 129], 90.00th=[ 144], 95.00th=[ 153], 00:23:20.486 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 201], 99.95th=[ 215], 00:23:20.486 | 99.99th=[ 228] 00:23:20.486 bw ( KiB/s): min=101888, max=449660, per=9.53%, avg=185836.60, stdev=95368.12, samples=20 00:23:20.486 iops : min= 398, max= 1756, avg=725.90, stdev=372.46, samples=20 00:23:20.486 lat (msec) : 4=0.01%, 10=0.49%, 20=1.88%, 50=30.64%, 100=23.55% 00:23:20.486 lat (msec) : 250=43.42% 00:23:20.486 cpu : usr=0.25%, sys=2.56%, ctx=1639, majf=0, minf=4097 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=7324,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.486 job3: (groupid=0, jobs=1): err= 0: pid=3347676: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=589, BW=147MiB/s (155MB/s)(1489MiB/10096msec) 00:23:20.486 slat (usec): min=12, max=119052, avg=1319.60, stdev=4296.77 00:23:20.486 clat (msec): min=2, max=214, avg=107.06, stdev=36.73 00:23:20.486 lat (msec): min=2, max=230, avg=108.38, stdev=37.28 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 59], 20.00th=[ 79], 00:23:20.486 | 30.00th=[ 92], 40.00th=[ 102], 50.00th=[ 112], 60.00th=[ 121], 00:23:20.486 | 70.00th=[ 128], 80.00th=[ 136], 90.00th=[ 148], 95.00th=[ 159], 00:23:20.486 | 99.00th=[ 201], 99.50th=[ 209], 99.90th=[ 213], 99.95th=[ 213], 00:23:20.486 | 99.99th=[ 215] 00:23:20.486 bw ( KiB/s): min=97792, max=239616, per=7.73%, avg=150770.85, stdev=37671.03, samples=20 00:23:20.486 iops : min= 382, max= 936, avg=588.90, stdev=147.18, samples=20 00:23:20.486 lat (msec) : 4=0.10%, 10=0.99%, 20=1.80%, 50=4.89%, 100=30.28% 00:23:20.486 lat (msec) : 250=61.94% 00:23:20.486 cpu : usr=0.25%, sys=1.84%, ctx=1525, majf=0, minf=4097 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=5954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.486 job4: (groupid=0, jobs=1): err= 0: pid=3347677: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=738, BW=185MiB/s (194MB/s)(1860MiB/10079msec) 00:23:20.486 slat (usec): min=14, max=66166, avg=1220.08, stdev=3430.99 00:23:20.486 clat (msec): min=7, max=184, avg=85.34, stdev=30.81 00:23:20.486 lat (msec): min=8, max=199, avg=86.56, stdev=31.23 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 25], 5.00th=[ 43], 10.00th=[ 52], 20.00th=[ 60], 00:23:20.486 | 30.00th=[ 66], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 89], 00:23:20.486 | 70.00th=[ 96], 80.00th=[ 111], 90.00th=[ 132], 95.00th=[ 148], 00:23:20.486 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 171], 99.95th=[ 174], 00:23:20.486 | 99.99th=[ 184] 00:23:20.486 bw ( KiB/s): min=114176, max=280064, per=9.69%, avg=188860.75, stdev=50206.38, samples=20 00:23:20.486 iops : min= 446, max= 1094, avg=737.70, stdev=196.14, samples=20 00:23:20.486 lat (msec) : 10=0.11%, 20=0.50%, 50=8.22%, 100=65.54%, 250=25.63% 00:23:20.486 cpu : usr=0.21%, sys=3.21%, ctx=1669, majf=0, minf=4097 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=7441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.486 job5: (groupid=0, jobs=1): err= 0: pid=3347678: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=790, BW=198MiB/s (207MB/s)(1996MiB/10101msec) 00:23:20.486 slat (usec): min=11, max=85008, avg=929.37, stdev=3037.56 00:23:20.486 clat (usec): min=1153, max=219031, avg=79918.01, stdev=34241.24 00:23:20.486 lat (usec): min=1181, max=268498, avg=80847.38, stdev=34520.23 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 5], 5.00th=[ 21], 10.00th=[ 38], 20.00th=[ 54], 
00:23:20.486 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 87], 00:23:20.486 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 122], 95.00th=[ 134], 00:23:20.486 | 99.00th=[ 182], 99.50th=[ 197], 99.90th=[ 211], 99.95th=[ 215], 00:23:20.486 | 99.99th=[ 220] 00:23:20.486 bw ( KiB/s): min=132608, max=342528, per=10.40%, avg=202789.30, stdev=57471.00, samples=20 00:23:20.486 iops : min= 518, max= 1338, avg=792.10, stdev=224.55, samples=20 00:23:20.486 lat (msec) : 2=0.05%, 4=0.33%, 10=2.09%, 20=2.47%, 50=12.91% 00:23:20.486 lat (msec) : 100=57.23%, 250=24.92% 00:23:20.486 cpu : usr=0.26%, sys=3.05%, ctx=1947, majf=0, minf=4097 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=7985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.486 job6: (groupid=0, jobs=1): err= 0: pid=3347679: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=612, BW=153MiB/s (161MB/s)(1539MiB/10048msec) 00:23:20.486 slat (usec): min=11, max=91613, avg=1421.55, stdev=4346.59 00:23:20.486 clat (usec): min=1343, max=215477, avg=102936.75, stdev=43053.80 00:23:20.486 lat (usec): min=1369, max=215544, avg=104358.30, stdev=43715.26 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 25], 20.00th=[ 67], 00:23:20.486 | 30.00th=[ 93], 40.00th=[ 104], 50.00th=[ 113], 60.00th=[ 122], 00:23:20.486 | 70.00th=[ 129], 80.00th=[ 138], 90.00th=[ 148], 95.00th=[ 161], 00:23:20.486 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 209], 99.95th=[ 211], 00:23:20.486 | 99.99th=[ 215] 00:23:20.486 bw ( KiB/s): min=95744, max=259072, per=8.00%, avg=155980.80, stdev=46194.02, samples=20 00:23:20.486 iops : min= 374, max= 1012, avg=609.30, stdev=180.45, samples=20 00:23:20.486 lat (msec) : 2=0.10%, 4=0.42%, 10=3.26%, 20=5.20%, 50=5.86% 00:23:20.486 lat (msec) : 100=21.86%, 250=63.29% 00:23:20.486 cpu : usr=0.22%, sys=2.52%, ctx=1514, majf=0, minf=4097 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=6157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.486 job7: (groupid=0, jobs=1): err= 0: pid=3347680: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=738, BW=185MiB/s (194MB/s)(1866MiB/10102msec) 00:23:20.486 slat (usec): min=11, max=88308, avg=1068.55, stdev=3488.83 00:23:20.486 clat (msec): min=2, max=220, avg=85.44, stdev=36.66 00:23:20.486 lat (msec): min=2, max=220, avg=86.51, stdev=37.11 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 55], 00:23:20.486 | 30.00th=[ 67], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 95], 00:23:20.486 | 70.00th=[ 106], 80.00th=[ 118], 90.00th=[ 130], 95.00th=[ 142], 00:23:20.486 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 218], 99.95th=[ 218], 00:23:20.486 | 99.99th=[ 222] 00:23:20.486 bw ( KiB/s): min=115200, max=356864, per=9.72%, avg=189402.20, stdev=61520.67, samples=20 00:23:20.486 iops : min= 450, max= 1394, avg=739.85, stdev=240.32, samples=20 00:23:20.486 lat (msec) : 4=0.12%, 10=1.27%, 20=2.97%, 50=13.19%, 
100=46.94% 00:23:20.486 lat (msec) : 250=35.51% 00:23:20.486 cpu : usr=0.21%, sys=2.67%, ctx=1774, majf=0, minf=4097 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=7463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.486 job8: (groupid=0, jobs=1): err= 0: pid=3347682: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=649, BW=162MiB/s (170MB/s)(1636MiB/10073msec) 00:23:20.486 slat (usec): min=11, max=131556, avg=1134.53, stdev=4089.46 00:23:20.486 clat (usec): min=1455, max=238807, avg=97264.90, stdev=39248.04 00:23:20.486 lat (usec): min=1484, max=282590, avg=98399.44, stdev=39672.66 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 69], 00:23:20.486 | 30.00th=[ 79], 40.00th=[ 87], 50.00th=[ 95], 60.00th=[ 106], 00:23:20.486 | 70.00th=[ 117], 80.00th=[ 129], 90.00th=[ 146], 95.00th=[ 163], 00:23:20.486 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 222], 99.95th=[ 224], 00:23:20.486 | 99.99th=[ 239] 00:23:20.486 bw ( KiB/s): min=112128, max=234496, per=8.51%, avg=165872.90, stdev=31168.66, samples=20 00:23:20.486 iops : min= 438, max= 916, avg=647.90, stdev=121.77, samples=20 00:23:20.486 lat (msec) : 2=0.06%, 4=0.49%, 10=0.84%, 20=1.39%, 50=8.04% 00:23:20.486 lat (msec) : 100=44.78%, 250=44.40% 00:23:20.486 cpu : usr=0.31%, sys=2.07%, ctx=1628, majf=0, minf=4097 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=6543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.486 job9: (groupid=0, jobs=1): err= 0: pid=3347683: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=649, BW=162MiB/s (170MB/s)(1634MiB/10053msec) 00:23:20.486 slat (usec): min=8, max=68912, avg=917.82, stdev=3551.71 00:23:20.486 clat (usec): min=1676, max=214229, avg=97412.64, stdev=44644.27 00:23:20.486 lat (usec): min=1720, max=223900, avg=98330.46, stdev=45147.80 00:23:20.486 clat percentiles (msec): 00:23:20.486 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 58], 00:23:20.486 | 30.00th=[ 84], 40.00th=[ 97], 50.00th=[ 108], 60.00th=[ 118], 00:23:20.486 | 70.00th=[ 127], 80.00th=[ 134], 90.00th=[ 146], 95.00th=[ 155], 00:23:20.486 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 197], 99.95th=[ 197], 00:23:20.486 | 99.99th=[ 215] 00:23:20.486 bw ( KiB/s): min=99328, max=331264, per=8.50%, avg=165653.20, stdev=52510.67, samples=20 00:23:20.486 iops : min= 388, max= 1294, avg=647.05, stdev=205.11, samples=20 00:23:20.486 lat (msec) : 2=0.03%, 4=0.73%, 10=2.75%, 20=6.60%, 50=8.85% 00:23:20.486 lat (msec) : 100=24.09%, 250=56.95% 00:23:20.486 cpu : usr=0.20%, sys=2.42%, ctx=1939, majf=0, minf=4097 00:23:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.486 issued rwts: total=6534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.486 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:23:20.486 job10: (groupid=0, jobs=1): err= 0: pid=3347684: Tue Jun 11 15:08:37 2024 00:23:20.486 read: IOPS=930, BW=233MiB/s (244MB/s)(2348MiB/10094msec) 00:23:20.486 slat (usec): min=13, max=112129, avg=895.02, stdev=3082.77 00:23:20.486 clat (msec): min=2, max=211, avg=67.84, stdev=40.94 00:23:20.487 lat (msec): min=2, max=215, avg=68.73, stdev=41.48 00:23:20.487 clat percentiles (msec): 00:23:20.487 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 30], 20.00th=[ 34], 00:23:20.487 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 46], 60.00th=[ 81], 00:23:20.487 | 70.00th=[ 96], 80.00th=[ 110], 90.00th=[ 124], 95.00th=[ 133], 00:23:20.487 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 205], 99.95th=[ 209], 00:23:20.487 | 99.99th=[ 213] 00:23:20.487 bw ( KiB/s): min=115712, max=460800, per=12.25%, avg=238745.60, stdev=116800.29, samples=20 00:23:20.487 iops : min= 452, max= 1800, avg=932.60, stdev=456.25, samples=20 00:23:20.487 lat (msec) : 4=0.03%, 10=0.85%, 20=2.75%, 50=47.34%, 100=22.12% 00:23:20.487 lat (msec) : 250=26.91% 00:23:20.487 cpu : usr=0.46%, sys=3.51%, ctx=2105, majf=0, minf=4097 00:23:20.487 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:20.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.487 issued rwts: total=9390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.487 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.487 00:23:20.487 Run status group 0 (all jobs): 00:23:20.487 READ: bw=1904MiB/s (1996MB/s), 144MiB/s-233MiB/s (151MB/s-244MB/s), io=18.8GiB (20.2GB), run=10014-10102msec 00:23:20.487 00:23:20.487 Disk stats (read/write): 00:23:20.487 nvme0n1: ios=12093/0, merge=0/0, ticks=1216233/0, in_queue=1216233, util=96.03% 00:23:20.487 nvme10n1: ios=11584/0, merge=0/0, ticks=1251874/0, in_queue=1251874, util=96.40% 00:23:20.487 nvme1n1: ios=13863/0, merge=0/0, ticks=1218744/0, in_queue=1218744, util=96.74% 00:23:20.487 nvme2n1: ios=11845/0, merge=0/0, ticks=1244932/0, in_queue=1244932, util=97.05% 00:23:20.487 nvme3n1: ios=14811/0, merge=0/0, ticks=1246031/0, in_queue=1246031, util=97.20% 00:23:20.487 nvme4n1: ios=15897/0, merge=0/0, ticks=1249171/0, in_queue=1249171, util=97.68% 00:23:20.487 nvme5n1: ios=11832/0, merge=0/0, ticks=1215929/0, in_queue=1215929, util=97.84% 00:23:20.487 nvme6n1: ios=14849/0, merge=0/0, ticks=1246776/0, in_queue=1246776, util=98.12% 00:23:20.487 nvme7n1: ios=13035/0, merge=0/0, ticks=1251912/0, in_queue=1251912, util=98.73% 00:23:20.487 nvme8n1: ios=12668/0, merge=0/0, ticks=1224513/0, in_queue=1224513, util=98.98% 00:23:20.487 nvme9n1: ios=18700/0, merge=0/0, ticks=1246060/0, in_queue=1246060, util=99.20% 00:23:20.487 15:08:37 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:20.487 [global] 00:23:20.487 thread=1 00:23:20.487 invalidate=1 00:23:20.487 rw=randwrite 00:23:20.487 time_based=1 00:23:20.487 runtime=10 00:23:20.487 ioengine=libaio 00:23:20.487 direct=1 00:23:20.487 bs=262144 00:23:20.487 iodepth=64 00:23:20.487 norandommap=1 00:23:20.487 numjobs=1 00:23:20.487 00:23:20.487 [job0] 00:23:20.487 filename=/dev/nvme0n1 00:23:20.487 [job1] 00:23:20.487 filename=/dev/nvme10n1 00:23:20.487 [job2] 00:23:20.487 filename=/dev/nvme1n1 00:23:20.487 [job3] 00:23:20.487 filename=/dev/nvme2n1 00:23:20.487 [job4] 00:23:20.487 filename=/dev/nvme3n1 00:23:20.487 
[job5] 00:23:20.487 filename=/dev/nvme4n1 00:23:20.487 [job6] 00:23:20.487 filename=/dev/nvme5n1 00:23:20.487 [job7] 00:23:20.487 filename=/dev/nvme6n1 00:23:20.487 [job8] 00:23:20.487 filename=/dev/nvme7n1 00:23:20.487 [job9] 00:23:20.487 filename=/dev/nvme8n1 00:23:20.487 [job10] 00:23:20.487 filename=/dev/nvme9n1 00:23:20.487 Could not set queue depth (nvme0n1) 00:23:20.487 Could not set queue depth (nvme10n1) 00:23:20.487 Could not set queue depth (nvme1n1) 00:23:20.487 Could not set queue depth (nvme2n1) 00:23:20.487 Could not set queue depth (nvme3n1) 00:23:20.487 Could not set queue depth (nvme4n1) 00:23:20.487 Could not set queue depth (nvme5n1) 00:23:20.487 Could not set queue depth (nvme6n1) 00:23:20.487 Could not set queue depth (nvme7n1) 00:23:20.487 Could not set queue depth (nvme8n1) 00:23:20.487 Could not set queue depth (nvme9n1) 00:23:20.487 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:20.487 fio-3.35 00:23:20.487 Starting 11 threads 00:23:30.459 00:23:30.459 job0: (groupid=0, jobs=1): err= 0: pid=3349380: Tue Jun 11 15:08:49 2024 00:23:30.459 write: IOPS=507, BW=127MiB/s (133MB/s)(1277MiB/10069msec); 0 zone resets 00:23:30.459 slat (usec): min=19, max=76563, avg=1722.60, stdev=4178.12 00:23:30.459 clat (msec): min=2, max=284, avg=124.40, stdev=69.98 00:23:30.459 lat (msec): min=2, max=284, avg=126.12, stdev=70.91 00:23:30.459 clat percentiles (msec): 00:23:30.459 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 52], 20.00th=[ 68], 00:23:30.459 | 30.00th=[ 75], 40.00th=[ 78], 50.00th=[ 100], 60.00th=[ 161], 00:23:30.459 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 232], 95.00th=[ 249], 00:23:30.459 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 284], 00:23:30.459 | 99.99th=[ 284] 00:23:30.459 bw ( KiB/s): min=63488, max=290304, per=9.78%, avg=129159.15, stdev=72090.84, samples=20 00:23:30.459 iops : min= 248, max= 1134, avg=504.50, stdev=281.63, samples=20 00:23:30.459 lat (msec) : 4=0.08%, 10=1.21%, 20=2.13%, 50=5.11%, 100=41.66% 00:23:30.459 lat (msec) : 250=45.48%, 500=4.33% 00:23:30.460 cpu : usr=1.13%, sys=1.41%, ctx=2018, majf=0, minf=1 00:23:30.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:30.460 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.460 issued rwts: total=0,5108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.460 job1: (groupid=0, jobs=1): err= 0: pid=3349419: Tue Jun 11 15:08:49 2024 00:23:30.460 write: IOPS=356, BW=89.1MiB/s (93.4MB/s)(909MiB/10206msec); 0 zone resets 00:23:30.460 slat (usec): min=20, max=156533, avg=2581.07, stdev=6038.46 00:23:30.460 clat (msec): min=12, max=464, avg=176.91, stdev=51.70 00:23:30.460 lat (msec): min=12, max=464, avg=179.50, stdev=52.22 00:23:30.460 clat percentiles (msec): 00:23:30.460 | 1.00th=[ 35], 5.00th=[ 81], 10.00th=[ 104], 20.00th=[ 150], 00:23:30.460 | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 188], 00:23:30.460 | 70.00th=[ 201], 80.00th=[ 213], 90.00th=[ 230], 95.00th=[ 249], 00:23:30.460 | 99.00th=[ 296], 99.50th=[ 384], 99.90th=[ 451], 99.95th=[ 464], 00:23:30.460 | 99.99th=[ 464] 00:23:30.460 bw ( KiB/s): min=61952, max=161280, per=6.92%, avg=91456.15, stdev=21666.11, samples=20 00:23:30.460 iops : min= 242, max= 630, avg=357.25, stdev=84.63, samples=20 00:23:30.460 lat (msec) : 20=0.19%, 50=1.65%, 100=6.41%, 250=87.18%, 500=4.57% 00:23:30.460 cpu : usr=0.82%, sys=0.88%, ctx=1105, majf=0, minf=1 00:23:30.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:23:30.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.460 issued rwts: total=0,3636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.460 job2: (groupid=0, jobs=1): err= 0: pid=3349422: Tue Jun 11 15:08:49 2024 00:23:30.460 write: IOPS=507, BW=127MiB/s (133MB/s)(1294MiB/10205msec); 0 zone resets 00:23:30.460 slat (usec): min=29, max=95467, avg=1776.09, stdev=3857.80 00:23:30.460 clat (msec): min=7, max=425, avg=124.32, stdev=60.42 00:23:30.460 lat (msec): min=9, max=425, avg=126.10, stdev=61.15 00:23:30.460 clat percentiles (msec): 00:23:30.460 | 1.00th=[ 22], 5.00th=[ 53], 10.00th=[ 71], 20.00th=[ 92], 00:23:30.460 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 106], 00:23:30.460 | 70.00th=[ 118], 80.00th=[ 178], 90.00th=[ 228], 95.00th=[ 247], 00:23:30.460 | 99.00th=[ 284], 99.50th=[ 347], 99.90th=[ 414], 99.95th=[ 414], 00:23:30.460 | 99.99th=[ 426] 00:23:30.460 bw ( KiB/s): min=67584, max=202240, per=9.91%, avg=130918.40, stdev=46155.12, samples=20 00:23:30.460 iops : min= 264, max= 790, avg=511.40, stdev=180.29, samples=20 00:23:30.460 lat (msec) : 10=0.06%, 20=0.71%, 50=3.90%, 100=42.59%, 250=48.50% 00:23:30.460 lat (msec) : 500=4.23% 00:23:30.460 cpu : usr=1.57%, sys=1.47%, ctx=1788, majf=0, minf=1 00:23:30.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:30.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.460 issued rwts: total=0,5177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.460 job3: (groupid=0, jobs=1): err= 0: pid=3349423: Tue Jun 11 15:08:49 2024 00:23:30.460 write: IOPS=370, BW=92.6MiB/s (97.1MB/s)(937MiB/10124msec); 0 zone resets 00:23:30.460 slat (usec): min=24, max=91355, avg=2383.12, stdev=5082.65 
00:23:30.460 clat (msec): min=5, max=264, avg=170.38, stdev=49.33 00:23:30.460 lat (msec): min=5, max=264, avg=172.77, stdev=49.99 00:23:30.460 clat percentiles (msec): 00:23:30.460 | 1.00th=[ 11], 5.00th=[ 66], 10.00th=[ 121], 20.00th=[ 136], 00:23:30.460 | 30.00th=[ 157], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 186], 00:23:30.460 | 70.00th=[ 201], 80.00th=[ 213], 90.00th=[ 224], 95.00th=[ 239], 00:23:30.460 | 99.00th=[ 257], 99.50th=[ 262], 99.90th=[ 264], 99.95th=[ 266], 00:23:30.460 | 99.99th=[ 266] 00:23:30.460 bw ( KiB/s): min=67584, max=123392, per=7.14%, avg=94361.60, stdev=16674.56, samples=20 00:23:30.460 iops : min= 264, max= 482, avg=368.60, stdev=65.14, samples=20 00:23:30.460 lat (msec) : 10=0.80%, 20=1.23%, 50=2.03%, 100=3.17%, 250=90.26% 00:23:30.460 lat (msec) : 500=2.51% 00:23:30.460 cpu : usr=0.88%, sys=1.25%, ctx=1392, majf=0, minf=1 00:23:30.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:23:30.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.460 issued rwts: total=0,3749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.460 job4: (groupid=0, jobs=1): err= 0: pid=3349428: Tue Jun 11 15:08:49 2024 00:23:30.460 write: IOPS=814, BW=204MiB/s (213MB/s)(2061MiB/10125msec); 0 zone resets 00:23:30.460 slat (usec): min=20, max=334111, avg=1164.48, stdev=5237.89 00:23:30.460 clat (msec): min=4, max=564, avg=77.38, stdev=51.88 00:23:30.460 lat (msec): min=4, max=564, avg=78.55, stdev=52.50 00:23:30.460 clat percentiles (msec): 00:23:30.460 | 1.00th=[ 13], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 44], 00:23:30.460 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 63], 60.00th=[ 77], 00:23:30.460 | 70.00th=[ 100], 80.00th=[ 111], 90.00th=[ 136], 95.00th=[ 146], 00:23:30.460 | 99.00th=[ 239], 99.50th=[ 338], 99.90th=[ 550], 99.95th=[ 558], 00:23:30.460 | 99.99th=[ 567] 00:23:30.460 bw ( KiB/s): min=85504, max=368640, per=15.85%, avg=209408.00, stdev=98045.41, samples=20 00:23:30.460 iops : min= 334, max= 1440, avg=818.00, stdev=382.99, samples=20 00:23:30.460 lat (msec) : 10=0.24%, 20=2.22%, 50=45.09%, 100=23.30%, 250=28.29% 00:23:30.460 lat (msec) : 500=0.42%, 750=0.42% 00:23:30.460 cpu : usr=1.46%, sys=1.98%, ctx=2425, majf=0, minf=1 00:23:30.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:30.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.460 issued rwts: total=0,8243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.460 job5: (groupid=0, jobs=1): err= 0: pid=3349429: Tue Jun 11 15:08:49 2024 00:23:30.460 write: IOPS=396, BW=99.0MiB/s (104MB/s)(1003MiB/10126msec); 0 zone resets 00:23:30.460 slat (usec): min=25, max=55571, avg=2102.63, stdev=4451.94 00:23:30.460 clat (msec): min=3, max=310, avg=159.35, stdev=49.62 00:23:30.460 lat (msec): min=4, max=314, avg=161.45, stdev=50.34 00:23:30.460 clat percentiles (msec): 00:23:30.460 | 1.00th=[ 17], 5.00th=[ 59], 10.00th=[ 90], 20.00th=[ 129], 00:23:30.460 | 30.00th=[ 140], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 176], 00:23:30.460 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 209], 95.00th=[ 232], 00:23:30.460 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 309], 00:23:30.460 | 99.99th=[ 313] 
00:23:30.460 bw ( KiB/s): min=59904, max=143360, per=7.65%, avg=101068.80, stdev=21911.66, samples=20 00:23:30.460 iops : min= 234, max= 560, avg=394.80, stdev=85.59, samples=20 00:23:30.460 lat (msec) : 4=0.02%, 10=0.10%, 20=1.35%, 50=2.34%, 100=9.15% 00:23:30.460 lat (msec) : 250=84.49%, 500=2.54% 00:23:30.460 cpu : usr=1.10%, sys=1.26%, ctx=1658, majf=0, minf=1 00:23:30.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:23:30.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.460 issued rwts: total=0,4011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.460 job6: (groupid=0, jobs=1): err= 0: pid=3349430: Tue Jun 11 15:08:49 2024 00:23:30.460 write: IOPS=359, BW=89.8MiB/s (94.2MB/s)(915MiB/10179msec); 0 zone resets 00:23:30.460 slat (usec): min=28, max=51564, avg=2575.89, stdev=5109.37 00:23:30.460 clat (msec): min=11, max=410, avg=175.42, stdev=49.48 00:23:30.460 lat (msec): min=11, max=410, avg=178.00, stdev=49.97 00:23:30.460 clat percentiles (msec): 00:23:30.460 | 1.00th=[ 37], 5.00th=[ 95], 10.00th=[ 114], 20.00th=[ 136], 00:23:30.460 | 30.00th=[ 163], 40.00th=[ 171], 50.00th=[ 176], 60.00th=[ 182], 00:23:30.460 | 70.00th=[ 197], 80.00th=[ 209], 90.00th=[ 236], 95.00th=[ 253], 00:23:30.460 | 99.00th=[ 296], 99.50th=[ 342], 99.90th=[ 397], 99.95th=[ 409], 00:23:30.460 | 99.99th=[ 409] 00:23:30.460 bw ( KiB/s): min=63488, max=141312, per=6.97%, avg=92032.00, stdev=20455.65, samples=20 00:23:30.460 iops : min= 248, max= 552, avg=359.50, stdev=79.90, samples=20 00:23:30.460 lat (msec) : 20=0.27%, 50=1.94%, 100=4.29%, 250=87.34%, 500=6.15% 00:23:30.460 cpu : usr=0.80%, sys=1.25%, ctx=1213, majf=0, minf=1 00:23:30.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:23:30.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.460 issued rwts: total=0,3658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.460 job7: (groupid=0, jobs=1): err= 0: pid=3349433: Tue Jun 11 15:08:49 2024 00:23:30.460 write: IOPS=437, BW=109MiB/s (115MB/s)(1115MiB/10199msec); 0 zone resets 00:23:30.460 slat (usec): min=23, max=84196, avg=1832.26, stdev=4520.46 00:23:30.460 clat (msec): min=3, max=458, avg=144.46, stdev=60.97 00:23:30.460 lat (msec): min=5, max=458, avg=146.30, stdev=61.75 00:23:30.460 clat percentiles (msec): 00:23:30.460 | 1.00th=[ 18], 5.00th=[ 43], 10.00th=[ 75], 20.00th=[ 96], 00:23:30.460 | 30.00th=[ 97], 40.00th=[ 128], 50.00th=[ 153], 60.00th=[ 167], 00:23:30.460 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 213], 95.00th=[ 247], 00:23:30.460 | 99.00th=[ 268], 99.50th=[ 338], 99.90th=[ 443], 99.95th=[ 443], 00:23:30.461 | 99.99th=[ 460] 00:23:30.461 bw ( KiB/s): min=61952, max=169984, per=8.52%, avg=112527.00, stdev=35383.21, samples=20 00:23:30.461 iops : min= 242, max= 664, avg=439.55, stdev=138.22, samples=20 00:23:30.461 lat (msec) : 4=0.02%, 10=0.11%, 20=1.35%, 50=4.62%, 100=29.24% 00:23:30.461 lat (msec) : 250=60.39%, 500=4.26% 00:23:30.461 cpu : usr=0.88%, sys=1.29%, ctx=1957, majf=0, minf=1 00:23:30.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:30.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:23:30.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.461 issued rwts: total=0,4459,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.461 job8: (groupid=0, jobs=1): err= 0: pid=3349441: Tue Jun 11 15:08:49 2024 00:23:30.461 write: IOPS=453, BW=113MiB/s (119MB/s)(1156MiB/10202msec); 0 zone resets 00:23:30.461 slat (usec): min=23, max=58955, avg=1561.59, stdev=4367.82 00:23:30.461 clat (msec): min=2, max=418, avg=139.51, stdev=81.00 00:23:30.461 lat (msec): min=2, max=418, avg=141.07, stdev=82.11 00:23:30.461 clat percentiles (msec): 00:23:30.461 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 16], 20.00th=[ 37], 00:23:30.461 | 30.00th=[ 97], 40.00th=[ 142], 50.00th=[ 165], 60.00th=[ 171], 00:23:30.461 | 70.00th=[ 182], 80.00th=[ 203], 90.00th=[ 239], 95.00th=[ 259], 00:23:30.461 | 99.00th=[ 296], 99.50th=[ 338], 99.90th=[ 405], 99.95th=[ 405], 00:23:30.461 | 99.99th=[ 418] 00:23:30.461 bw ( KiB/s): min=61440, max=280576, per=8.84%, avg=116745.00, stdev=54730.65, samples=20 00:23:30.461 iops : min= 240, max= 1096, avg=456.00, stdev=213.81, samples=20 00:23:30.461 lat (msec) : 4=0.69%, 10=5.43%, 20=7.22%, 50=9.04%, 100=8.03% 00:23:30.461 lat (msec) : 250=61.52%, 500=8.07% 00:23:30.461 cpu : usr=0.99%, sys=1.53%, ctx=2823, majf=0, minf=1 00:23:30.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:30.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.461 issued rwts: total=0,4623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.461 job9: (groupid=0, jobs=1): err= 0: pid=3349442: Tue Jun 11 15:08:49 2024 00:23:30.461 write: IOPS=469, BW=117MiB/s (123MB/s)(1182MiB/10068msec); 0 zone resets 00:23:30.461 slat (usec): min=23, max=26139, avg=1721.69, stdev=3761.86 00:23:30.461 clat (msec): min=7, max=250, avg=134.50, stdev=52.93 00:23:30.461 lat (msec): min=7, max=255, avg=136.22, stdev=53.63 00:23:30.461 clat percentiles (msec): 00:23:30.461 | 1.00th=[ 22], 5.00th=[ 44], 10.00th=[ 61], 20.00th=[ 77], 00:23:30.461 | 30.00th=[ 91], 40.00th=[ 136], 50.00th=[ 150], 60.00th=[ 165], 00:23:30.461 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 213], 00:23:30.461 | 99.00th=[ 230], 99.50th=[ 239], 99.90th=[ 247], 99.95th=[ 251], 00:23:30.461 | 99.99th=[ 251] 00:23:30.461 bw ( KiB/s): min=88576, max=214528, per=9.04%, avg=119449.60, stdev=38771.37, samples=20 00:23:30.461 iops : min= 346, max= 838, avg=466.60, stdev=151.45, samples=20 00:23:30.461 lat (msec) : 10=0.15%, 20=0.72%, 50=5.90%, 100=24.72%, 250=68.45% 00:23:30.461 lat (msec) : 500=0.06% 00:23:30.461 cpu : usr=1.00%, sys=1.56%, ctx=2013, majf=0, minf=1 00:23:30.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:30.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.461 issued rwts: total=0,4729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.461 job10: (groupid=0, jobs=1): err= 0: pid=3349443: Tue Jun 11 15:08:49 2024 00:23:30.461 write: IOPS=516, BW=129MiB/s (135MB/s)(1318MiB/10202msec); 0 zone resets 00:23:30.461 slat (usec): min=24, max=102651, avg=1650.35, stdev=3894.17 00:23:30.461 clat (msec): min=3, 
max=437, avg=122.17, stdev=62.78 00:23:30.461 lat (msec): min=3, max=437, avg=123.82, stdev=63.62 00:23:30.461 clat percentiles (msec): 00:23:30.461 | 1.00th=[ 15], 5.00th=[ 39], 10.00th=[ 66], 20.00th=[ 75], 00:23:30.461 | 30.00th=[ 89], 40.00th=[ 99], 50.00th=[ 103], 60.00th=[ 106], 00:23:30.461 | 70.00th=[ 157], 80.00th=[ 169], 90.00th=[ 218], 95.00th=[ 247], 00:23:30.461 | 99.00th=[ 271], 99.50th=[ 347], 99.90th=[ 426], 99.95th=[ 426], 00:23:30.461 | 99.99th=[ 439] 00:23:30.461 bw ( KiB/s): min=64000, max=224768, per=10.09%, avg=133299.20, stdev=53300.13, samples=20 00:23:30.461 iops : min= 250, max= 878, avg=520.70, stdev=208.20, samples=20 00:23:30.461 lat (msec) : 4=0.02%, 10=0.40%, 20=1.50%, 50=5.79%, 100=36.94% 00:23:30.461 lat (msec) : 250=50.85%, 500=4.50% 00:23:30.461 cpu : usr=1.07%, sys=1.51%, ctx=2165, majf=0, minf=1 00:23:30.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:30.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:30.461 issued rwts: total=0,5270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:30.461 00:23:30.461 Run status group 0 (all jobs): 00:23:30.461 WRITE: bw=1290MiB/s (1353MB/s), 89.1MiB/s-204MiB/s (93.4MB/s-213MB/s), io=12.9GiB (13.8GB), run=10068-10206msec 00:23:30.461 00:23:30.461 Disk stats (read/write): 00:23:30.461 nvme0n1: ios=49/10018, merge=0/0, ticks=257/1218350, in_queue=1218607, util=93.61% 00:23:30.461 nvme10n1: ios=49/7128, merge=0/0, ticks=4362/1179098, in_queue=1183460, util=99.99% 00:23:30.461 nvme1n1: ios=0/10208, merge=0/0, ticks=0/1208817, in_queue=1208817, util=94.46% 00:23:30.461 nvme2n1: ios=0/7344, merge=0/0, ticks=0/1215354, in_queue=1215354, util=95.00% 00:23:30.461 nvme3n1: ios=41/16330, merge=0/0, ticks=3769/1126679, in_queue=1130448, util=99.88% 00:23:30.461 nvme4n1: ios=35/7867, merge=0/0, ticks=1089/1217460, in_queue=1218549, util=100.00% 00:23:30.461 nvme5n1: ios=0/7151, merge=0/0, ticks=0/1205301, in_queue=1205301, util=97.09% 00:23:30.461 nvme6n1: ios=34/8772, merge=0/0, ticks=3517/1204843, in_queue=1208360, util=99.90% 00:23:30.461 nvme7n1: ios=29/9097, merge=0/0, ticks=1572/1215851, in_queue=1217423, util=99.91% 00:23:30.461 nvme8n1: ios=0/9269, merge=0/0, ticks=0/1223058, in_queue=1223058, util=98.86% 00:23:30.461 nvme9n1: ios=28/10402, merge=0/0, ticks=888/1210674, in_queue=1211562, util=99.90% 00:23:30.461 15:08:49 -- target/multiconnection.sh@36 -- # sync 00:23:30.461 15:08:49 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:30.461 15:08:49 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.461 15:08:49 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:31.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:31.027 15:08:49 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:31.027 15:08:49 -- common/autotest_common.sh@1198 -- # local i=0 00:23:31.027 15:08:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:31.027 15:08:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:23:31.027 15:08:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:31.027 15:08:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:31.027 15:08:49 -- common/autotest_common.sh@1210 -- # return 0 00:23:31.027 15:08:49 -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.027 15:08:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.027 15:08:49 -- common/autotest_common.sh@10 -- # set +x 00:23:31.027 15:08:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.027 15:08:49 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.027 15:08:49 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:31.285 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:31.285 15:08:49 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:31.285 15:08:49 -- common/autotest_common.sh@1198 -- # local i=0 00:23:31.285 15:08:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:31.285 15:08:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:23:31.285 15:08:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:31.285 15:08:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:31.285 15:08:49 -- common/autotest_common.sh@1210 -- # return 0 00:23:31.285 15:08:49 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:31.285 15:08:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.285 15:08:49 -- common/autotest_common.sh@10 -- # set +x 00:23:31.285 15:08:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.285 15:08:49 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.285 15:08:49 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:31.542 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:31.542 15:08:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:31.542 15:08:50 -- common/autotest_common.sh@1198 -- # local i=0 00:23:31.542 15:08:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:31.542 15:08:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:23:31.542 15:08:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:31.542 15:08:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:31.542 15:08:50 -- common/autotest_common.sh@1210 -- # return 0 00:23:31.542 15:08:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:31.542 15:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.542 15:08:50 -- common/autotest_common.sh@10 -- # set +x 00:23:31.542 15:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.542 15:08:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.542 15:08:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:31.800 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:31.800 15:08:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:31.800 15:08:50 -- common/autotest_common.sh@1198 -- # local i=0 00:23:31.800 15:08:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:31.800 15:08:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:23:31.800 15:08:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:31.800 15:08:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:31.800 15:08:50 -- common/autotest_common.sh@1210 -- # return 0 00:23:31.800 15:08:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:31.800 15:08:50 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.800 15:08:50 -- common/autotest_common.sh@10 -- # set +x 00:23:31.800 15:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.800 15:08:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.800 15:08:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:32.058 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:32.058 15:08:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:32.058 15:08:50 -- common/autotest_common.sh@1198 -- # local i=0 00:23:32.058 15:08:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:32.058 15:08:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:23:32.058 15:08:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:32.058 15:08:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:32.058 15:08:50 -- common/autotest_common.sh@1210 -- # return 0 00:23:32.058 15:08:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:32.058 15:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.058 15:08:50 -- common/autotest_common.sh@10 -- # set +x 00:23:32.058 15:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.058 15:08:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.316 15:08:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:32.574 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:32.574 15:08:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:32.574 15:08:51 -- common/autotest_common.sh@1198 -- # local i=0 00:23:32.574 15:08:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:32.574 15:08:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:23:32.574 15:08:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:32.574 15:08:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:32.574 15:08:51 -- common/autotest_common.sh@1210 -- # return 0 00:23:32.574 15:08:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:32.574 15:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.574 15:08:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.574 15:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.574 15:08:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.574 15:08:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:32.831 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:32.831 15:08:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:32.831 15:08:51 -- common/autotest_common.sh@1198 -- # local i=0 00:23:32.831 15:08:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:32.831 15:08:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:23:32.831 15:08:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:32.831 15:08:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:32.831 15:08:51 -- common/autotest_common.sh@1210 -- # return 0 00:23:32.831 15:08:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:32.831 15:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.831 15:08:51 -- 
common/autotest_common.sh@10 -- # set +x 00:23:32.831 15:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.831 15:08:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.831 15:08:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:33.090 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:33.090 15:08:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:33.090 15:08:51 -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.090 15:08:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:33.090 15:08:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:23:33.090 15:08:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:33.090 15:08:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:33.090 15:08:51 -- common/autotest_common.sh@1210 -- # return 0 00:23:33.090 15:08:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:33.090 15:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.090 15:08:51 -- common/autotest_common.sh@10 -- # set +x 00:23:33.090 15:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.090 15:08:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.090 15:08:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:33.348 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:33.348 15:08:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:33.348 15:08:52 -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.348 15:08:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:33.348 15:08:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:23:33.348 15:08:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:33.348 15:08:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:33.348 15:08:52 -- common/autotest_common.sh@1210 -- # return 0 00:23:33.348 15:08:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:33.349 15:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.349 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.349 15:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.349 15:08:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.349 15:08:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:33.349 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:33.349 15:08:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:33.349 15:08:52 -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.349 15:08:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:33.349 15:08:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:23:33.349 15:08:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:33.349 15:08:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:33.349 15:08:52 -- common/autotest_common.sh@1210 -- # return 0 00:23:33.349 15:08:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:33.349 15:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.349 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.349 15:08:52 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.349 15:08:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.349 15:08:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:33.606 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:33.606 15:08:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:33.606 15:08:52 -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.606 15:08:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:33.606 15:08:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:23:33.606 15:08:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:33.606 15:08:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:33.606 15:08:52 -- common/autotest_common.sh@1210 -- # return 0 00:23:33.606 15:08:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:33.606 15:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.606 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.606 15:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.606 15:08:52 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:33.606 15:08:52 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:33.606 15:08:52 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:33.606 15:08:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:33.606 15:08:52 -- nvmf/common.sh@116 -- # sync 00:23:33.606 15:08:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:33.606 15:08:52 -- nvmf/common.sh@119 -- # set +e 00:23:33.606 15:08:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:33.606 15:08:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:33.606 rmmod nvme_tcp 00:23:33.606 rmmod nvme_fabrics 00:23:33.606 rmmod nvme_keyring 00:23:33.606 15:08:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:33.606 15:08:52 -- nvmf/common.sh@123 -- # set -e 00:23:33.606 15:08:52 -- nvmf/common.sh@124 -- # return 0 00:23:33.606 15:08:52 -- nvmf/common.sh@477 -- # '[' -n 3340034 ']' 00:23:33.606 15:08:52 -- nvmf/common.sh@478 -- # killprocess 3340034 00:23:33.606 15:08:52 -- common/autotest_common.sh@926 -- # '[' -z 3340034 ']' 00:23:33.606 15:08:52 -- common/autotest_common.sh@930 -- # kill -0 3340034 00:23:33.606 15:08:52 -- common/autotest_common.sh@931 -- # uname 00:23:33.606 15:08:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:33.606 15:08:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3340034 00:23:33.606 15:08:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:33.606 15:08:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:33.606 15:08:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3340034' 00:23:33.606 killing process with pid 3340034 00:23:33.606 15:08:52 -- common/autotest_common.sh@945 -- # kill 3340034 00:23:33.606 15:08:52 -- common/autotest_common.sh@950 -- # wait 3340034 00:23:34.173 15:08:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:34.173 15:08:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:34.173 15:08:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:34.173 15:08:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.173 15:08:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:34.173 15:08:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:34.173 15:08:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.173 15:08:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.705 15:08:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:36.705 00:23:36.705 real 1m14.812s 00:23:36.705 user 4m35.932s 00:23:36.705 sys 0m23.759s 00:23:36.705 15:08:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.705 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:23:36.705 ************************************ 00:23:36.705 END TEST nvmf_multiconnection 00:23:36.705 ************************************ 00:23:36.705 15:08:55 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:36.705 15:08:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:36.705 15:08:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:36.705 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:23:36.705 ************************************ 00:23:36.705 START TEST nvmf_initiator_timeout 00:23:36.705 ************************************ 00:23:36.705 15:08:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:36.705 * Looking for test storage... 00:23:36.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:36.705 15:08:55 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.705 15:08:55 -- nvmf/common.sh@7 -- # uname -s 00:23:36.705 15:08:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.705 15:08:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.705 15:08:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.705 15:08:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.705 15:08:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.705 15:08:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.705 15:08:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.705 15:08:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.705 15:08:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.705 15:08:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.705 15:08:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:36.705 15:08:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:36.705 15:08:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.705 15:08:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.705 15:08:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.705 15:08:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.705 15:08:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.705 15:08:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.705 15:08:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.705 15:08:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.705 15:08:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.705 15:08:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.705 15:08:55 -- paths/export.sh@5 -- # export PATH 00:23:36.705 15:08:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.705 15:08:55 -- nvmf/common.sh@46 -- # : 0 00:23:36.705 15:08:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:36.705 15:08:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:36.705 15:08:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:36.705 15:08:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.705 15:08:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.705 15:08:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:36.705 15:08:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:36.705 15:08:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:36.705 15:08:55 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:36.705 15:08:55 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:36.705 15:08:55 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:36.705 15:08:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:36.705 15:08:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.705 15:08:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:36.705 15:08:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:36.705 15:08:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:36.705 15:08:55 -- nvmf/common.sh@616 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.705 15:08:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.705 15:08:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.705 15:08:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:36.705 15:08:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:36.705 15:08:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:36.705 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:23:43.268 15:09:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:43.268 15:09:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:43.268 15:09:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:43.268 15:09:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:43.268 15:09:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:43.268 15:09:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:43.268 15:09:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:43.268 15:09:01 -- nvmf/common.sh@294 -- # net_devs=() 00:23:43.268 15:09:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:43.268 15:09:01 -- nvmf/common.sh@295 -- # e810=() 00:23:43.268 15:09:01 -- nvmf/common.sh@295 -- # local -ga e810 00:23:43.268 15:09:01 -- nvmf/common.sh@296 -- # x722=() 00:23:43.268 15:09:01 -- nvmf/common.sh@296 -- # local -ga x722 00:23:43.268 15:09:01 -- nvmf/common.sh@297 -- # mlx=() 00:23:43.268 15:09:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:43.268 15:09:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.268 15:09:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:43.268 15:09:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:43.268 15:09:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:43.268 15:09:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:43.268 15:09:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:43.268 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:43.268 15:09:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@339 -- # for pci in 
"${pci_devs[@]}" 00:23:43.268 15:09:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:43.268 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:43.268 15:09:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:43.268 15:09:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:43.268 15:09:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.268 15:09:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:43.268 15:09:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.268 15:09:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:43.268 Found net devices under 0000:af:00.0: cvl_0_0 00:23:43.268 15:09:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.268 15:09:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:43.268 15:09:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.268 15:09:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:43.268 15:09:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.268 15:09:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:43.268 Found net devices under 0000:af:00.1: cvl_0_1 00:23:43.268 15:09:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.268 15:09:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:43.268 15:09:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:43.268 15:09:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:43.268 15:09:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.268 15:09:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.268 15:09:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.268 15:09:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:43.268 15:09:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.268 15:09:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.268 15:09:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:43.268 15:09:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.268 15:09:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.268 15:09:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:43.268 15:09:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:43.268 15:09:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.268 15:09:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.268 15:09:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.268 15:09:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.268 15:09:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:43.268 15:09:01 -- nvmf/common.sh@259 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.268 15:09:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.268 15:09:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.268 15:09:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:43.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:23:43.268 00:23:43.268 --- 10.0.0.2 ping statistics --- 00:23:43.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.268 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:23:43.268 15:09:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:23:43.268 00:23:43.268 --- 10.0.0.1 ping statistics --- 00:23:43.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.268 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:23:43.268 15:09:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.268 15:09:01 -- nvmf/common.sh@410 -- # return 0 00:23:43.268 15:09:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:43.268 15:09:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.268 15:09:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:43.268 15:09:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.269 15:09:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:43.269 15:09:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:43.269 15:09:01 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:43.269 15:09:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:43.269 15:09:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:43.269 15:09:01 -- common/autotest_common.sh@10 -- # set +x 00:23:43.269 15:09:01 -- nvmf/common.sh@469 -- # nvmfpid=3355711 00:23:43.269 15:09:01 -- nvmf/common.sh@470 -- # waitforlisten 3355711 00:23:43.269 15:09:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:43.269 15:09:01 -- common/autotest_common.sh@819 -- # '[' -z 3355711 ']' 00:23:43.269 15:09:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.269 15:09:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:43.269 15:09:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.269 15:09:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:43.269 15:09:01 -- common/autotest_common.sh@10 -- # set +x 00:23:43.269 [2024-06-11 15:09:01.614928] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:23:43.269 [2024-06-11 15:09:01.614967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.269 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.269 [2024-06-11 15:09:01.696243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.269 [2024-06-11 15:09:01.784901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:43.269 [2024-06-11 15:09:01.785055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.269 [2024-06-11 15:09:01.785066] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.269 [2024-06-11 15:09:01.785075] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.269 [2024-06-11 15:09:01.785132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.269 [2024-06-11 15:09:01.785233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.269 [2024-06-11 15:09:01.785324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.269 [2024-06-11 15:09:01.785325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.845 15:09:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:43.845 15:09:02 -- common/autotest_common.sh@852 -- # return 0 00:23:43.845 15:09:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:43.845 15:09:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:43.845 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:23:43.845 15:09:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.845 15:09:02 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:43.845 15:09:02 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:43.845 15:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.845 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:23:43.845 Malloc0 00:23:43.845 15:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.845 15:09:02 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:43.845 15:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.845 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:23:43.845 Delay0 00:23:43.845 15:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.845 15:09:02 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.845 15:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.845 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:23:43.845 [2024-06-11 15:09:02.475820] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.845 15:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.845 15:09:02 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:43.845 15:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.845 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:23:43.845 15:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
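For readability: the rpc_cmd invocations in this part of the trace effectively forward their arguments to SPDK's scripts/rpc.py against the nvmf_tgt started above (listening on /var/tmp/spdk.sock). A minimal standalone sketch of the same target bring-up, assuming an SPDK checkout with scripts/rpc.py present and the target already running (assumptions, not part of the captured run; the captured run additionally wraps the target in the cvl_0_0_ns_spdk network namespace, omitted here):

    # Sketch only -- replays the RPCs visible in this trace; not the harness's exact helpers.
    rpc=./scripts/rpc.py                                                  # assumed location of rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MB malloc bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # wrap it in a delay bdev (30 avg/p99 read+write latency)
    $rpc nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, as in the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0          # expose Delay0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side (the trace also passes --hostnqn/--hostid):
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The bdev_delay_update_latency calls that follow raise the injected read/write latency from 30 to 31000000 (p99 write 310000000) while fio is running, which is what makes the initiator-timeout behaviour observable, and later drop it back to 30.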
00:23:43.845 15:09:02 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:43.845 15:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.845 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:23:43.845 15:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.845 15:09:02 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.845 15:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.845 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:23:43.845 [2024-06-11 15:09:02.504132] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.845 15:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.845 15:09:02 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:45.295 15:09:03 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:45.295 15:09:03 -- common/autotest_common.sh@1177 -- # local i=0 00:23:45.295 15:09:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.295 15:09:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:45.295 15:09:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:47.194 15:09:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:47.194 15:09:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:47.194 15:09:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:47.194 15:09:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:47.194 15:09:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:47.194 15:09:05 -- common/autotest_common.sh@1187 -- # return 0 00:23:47.194 15:09:05 -- target/initiator_timeout.sh@35 -- # fio_pid=3356574 00:23:47.194 15:09:05 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:47.194 15:09:05 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:47.194 [global] 00:23:47.194 thread=1 00:23:47.194 invalidate=1 00:23:47.194 rw=write 00:23:47.194 time_based=1 00:23:47.194 runtime=60 00:23:47.194 ioengine=libaio 00:23:47.194 direct=1 00:23:47.194 bs=4096 00:23:47.194 iodepth=1 00:23:47.194 norandommap=0 00:23:47.194 numjobs=1 00:23:47.194 00:23:47.194 verify_dump=1 00:23:47.194 verify_backlog=512 00:23:47.194 verify_state_save=0 00:23:47.194 do_verify=1 00:23:47.194 verify=crc32c-intel 00:23:47.194 [job0] 00:23:47.194 filename=/dev/nvme0n1 00:23:47.194 Could not set queue depth (nvme0n1) 00:23:47.451 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:47.451 fio-3.35 00:23:47.451 Starting 1 thread 00:23:50.731 15:09:08 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:50.731 15:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.731 15:09:08 -- common/autotest_common.sh@10 -- # set +x 00:23:50.731 true 00:23:50.731 15:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.731 15:09:08 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:50.731 15:09:08 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.731 15:09:08 -- common/autotest_common.sh@10 -- # set +x 00:23:50.731 true 00:23:50.731 15:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.731 15:09:08 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:50.731 15:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.731 15:09:08 -- common/autotest_common.sh@10 -- # set +x 00:23:50.731 true 00:23:50.731 15:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.731 15:09:08 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:50.731 15:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.731 15:09:08 -- common/autotest_common.sh@10 -- # set +x 00:23:50.731 true 00:23:50.731 15:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.731 15:09:08 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:53.257 15:09:11 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:53.257 15:09:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.257 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.257 true 00:23:53.257 15:09:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.257 15:09:11 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:53.257 15:09:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.257 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.257 true 00:23:53.257 15:09:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.257 15:09:11 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:53.257 15:09:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.257 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.257 true 00:23:53.257 15:09:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.257 15:09:11 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:53.257 15:09:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.257 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.257 true 00:23:53.257 15:09:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.257 15:09:11 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:53.257 15:09:11 -- target/initiator_timeout.sh@54 -- # wait 3356574 00:24:49.456 00:24:49.456 job0: (groupid=0, jobs=1): err= 0: pid=3356717: Tue Jun 11 15:10:06 2024 00:24:49.456 read: IOPS=98, BW=395KiB/s (405kB/s)(23.2MiB/60025msec) 00:24:49.456 slat (usec): min=5, max=14345, avg=14.18, stdev=237.13 00:24:49.456 clat (usec): min=344, max=41799k, avg=9817.29, stdev=542789.71 00:24:49.456 lat (usec): min=351, max=41799k, avg=9831.46, stdev=542789.89 00:24:49.456 clat percentiles (usec): 00:24:49.456 | 1.00th=[ 392], 5.00th=[ 420], 10.00th=[ 445], 00:24:49.456 | 20.00th=[ 478], 30.00th=[ 486], 40.00th=[ 498], 00:24:49.456 | 50.00th=[ 506], 60.00th=[ 523], 70.00th=[ 553], 00:24:49.456 | 80.00th=[ 586], 90.00th=[ 603], 95.00th=[ 41157], 00:24:49.456 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:24:49.456 | 99.95th=[ 42206], 99.99th=[17112761] 00:24:49.456 write: IOPS=102, BW=409KiB/s (419kB/s)(24.0MiB/60025msec); 0 zone resets 00:24:49.456 slat (nsec): min=7242, max=47507, avg=10512.87, stdev=2571.11 00:24:49.456 clat (usec): min=191, max=511, avg=262.20, stdev=40.92 
00:24:49.456 lat (usec): min=199, max=551, avg=272.72, stdev=42.31 00:24:49.456 clat percentiles (usec): 00:24:49.456 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:24:49.456 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 265], 00:24:49.456 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 330], 00:24:49.456 | 99.00th=[ 383], 99.50th=[ 388], 99.90th=[ 408], 99.95th=[ 441], 00:24:49.456 | 99.99th=[ 510] 00:24:49.456 bw ( KiB/s): min= 2568, max= 6664, per=100.00%, avg=4915.20, stdev=1208.62, samples=10 00:24:49.456 iops : min= 642, max= 1666, avg=1228.80, stdev=302.16, samples=10 00:24:49.456 lat (usec) : 250=26.67%, 500=45.82%, 750=24.75%, 1000=0.01% 00:24:49.456 lat (msec) : 2=0.01%, 10=0.01%, 50=2.73%, >=2000=0.01% 00:24:49.456 cpu : usr=0.16%, sys=0.31%, ctx=12079, majf=0, minf=2 00:24:49.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:49.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.456 issued rwts: total=5931,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:49.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:49.456 00:24:49.456 Run status group 0 (all jobs): 00:24:49.456 READ: bw=395KiB/s (405kB/s), 395KiB/s-395KiB/s (405kB/s-405kB/s), io=23.2MiB (24.3MB), run=60025-60025msec 00:24:49.456 WRITE: bw=409KiB/s (419kB/s), 409KiB/s-409KiB/s (419kB/s-419kB/s), io=24.0MiB (25.2MB), run=60025-60025msec 00:24:49.456 00:24:49.456 Disk stats (read/write): 00:24:49.457 nvme0n1: ios=6026/6144, merge=0/0, ticks=16334/1539, in_queue=17873, util=99.56% 00:24:49.457 15:10:06 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:49.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:49.457 15:10:06 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:49.457 15:10:06 -- common/autotest_common.sh@1198 -- # local i=0 00:24:49.457 15:10:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:49.457 15:10:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:49.457 15:10:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:49.457 15:10:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:49.457 15:10:06 -- common/autotest_common.sh@1210 -- # return 0 00:24:49.457 15:10:06 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:49.457 15:10:06 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:49.457 nvmf hotplug test: fio successful as expected 00:24:49.457 15:10:06 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.457 15:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.457 15:10:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.457 15:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.457 15:10:06 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:49.457 15:10:06 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:49.457 15:10:06 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:49.457 15:10:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:49.457 15:10:06 -- nvmf/common.sh@116 -- # sync 00:24:49.457 15:10:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:49.457 15:10:06 -- nvmf/common.sh@119 -- # set +e 
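Between the fio summary above and the module unloads below, the trace tears the test back down. A compact sketch of that teardown path, assuming the same NQN and serial as above and an SPDK checkout in the working directory (illustrative only; the harness's waitforserial_disconnect and killprocess helpers do roughly this with retries and timeouts):

    # Sketch of the teardown steps this trace walks through.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                         # drop the initiator-side controller
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done   # roughly waitforserial_disconnect
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # remove the subsystem on the target
    rm -f ./local-job0-0-verify.state                                     # drop fio's verify state file
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics                # unload initiator transport modules (retried in the trace)
    kill "$nvmfpid"                                                       # stop nvmf_tgt; the trace's killprocess also waits for it to exit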
00:24:49.457 15:10:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:49.457 15:10:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:49.457 rmmod nvme_tcp 00:24:49.457 rmmod nvme_fabrics 00:24:49.457 rmmod nvme_keyring 00:24:49.457 15:10:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:49.457 15:10:06 -- nvmf/common.sh@123 -- # set -e 00:24:49.457 15:10:06 -- nvmf/common.sh@124 -- # return 0 00:24:49.457 15:10:06 -- nvmf/common.sh@477 -- # '[' -n 3355711 ']' 00:24:49.457 15:10:06 -- nvmf/common.sh@478 -- # killprocess 3355711 00:24:49.457 15:10:06 -- common/autotest_common.sh@926 -- # '[' -z 3355711 ']' 00:24:49.457 15:10:06 -- common/autotest_common.sh@930 -- # kill -0 3355711 00:24:49.457 15:10:06 -- common/autotest_common.sh@931 -- # uname 00:24:49.457 15:10:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:49.457 15:10:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3355711 00:24:49.457 15:10:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:49.457 15:10:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:49.457 15:10:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3355711' 00:24:49.457 killing process with pid 3355711 00:24:49.457 15:10:06 -- common/autotest_common.sh@945 -- # kill 3355711 00:24:49.457 15:10:06 -- common/autotest_common.sh@950 -- # wait 3355711 00:24:49.457 15:10:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:49.457 15:10:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:49.457 15:10:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:49.457 15:10:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:49.457 15:10:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:49.457 15:10:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.457 15:10:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.457 15:10:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.391 15:10:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:50.392 00:24:50.392 real 1m13.940s 00:24:50.392 user 4m31.255s 00:24:50.392 sys 0m6.891s 00:24:50.392 15:10:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.392 15:10:08 -- common/autotest_common.sh@10 -- # set +x 00:24:50.392 ************************************ 00:24:50.392 END TEST nvmf_initiator_timeout 00:24:50.392 ************************************ 00:24:50.392 15:10:08 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:50.392 15:10:08 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:50.392 15:10:08 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:50.392 15:10:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:50.392 15:10:08 -- common/autotest_common.sh@10 -- # set +x 00:24:56.953 15:10:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:56.953 15:10:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:56.953 15:10:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:56.953 15:10:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:56.953 15:10:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:56.953 15:10:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:56.953 15:10:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:56.953 15:10:15 -- nvmf/common.sh@294 -- # net_devs=() 00:24:56.953 15:10:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:56.953 15:10:15 -- nvmf/common.sh@295 -- # e810=() 00:24:56.953 15:10:15 -- nvmf/common.sh@295 -- # local -ga 
e810 00:24:56.953 15:10:15 -- nvmf/common.sh@296 -- # x722=() 00:24:56.953 15:10:15 -- nvmf/common.sh@296 -- # local -ga x722 00:24:56.953 15:10:15 -- nvmf/common.sh@297 -- # mlx=() 00:24:56.953 15:10:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:56.953 15:10:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.953 15:10:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:56.953 15:10:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:56.953 15:10:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:56.953 15:10:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:56.953 15:10:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:56.953 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:56.953 15:10:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:56.953 15:10:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:56.953 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:56.953 15:10:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:56.953 15:10:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:56.953 15:10:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.953 15:10:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:56.953 15:10:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.953 15:10:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:56.953 Found net devices under 0000:af:00.0: cvl_0_0 00:24:56.953 15:10:15 -- 
nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.953 15:10:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:56.953 15:10:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.953 15:10:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:56.953 15:10:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.953 15:10:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:56.953 Found net devices under 0000:af:00.1: cvl_0_1 00:24:56.953 15:10:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.953 15:10:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:56.953 15:10:15 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.953 15:10:15 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:56.953 15:10:15 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:56.953 15:10:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:56.953 15:10:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:56.953 15:10:15 -- common/autotest_common.sh@10 -- # set +x 00:24:56.953 ************************************ 00:24:56.953 START TEST nvmf_perf_adq 00:24:56.953 ************************************ 00:24:56.953 15:10:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:56.953 * Looking for test storage... 00:24:56.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:56.953 15:10:15 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.953 15:10:15 -- nvmf/common.sh@7 -- # uname -s 00:24:56.953 15:10:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.953 15:10:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.953 15:10:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.953 15:10:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.953 15:10:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.953 15:10:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.953 15:10:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.953 15:10:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.953 15:10:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.953 15:10:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.953 15:10:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:56.953 15:10:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:56.953 15:10:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.953 15:10:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.953 15:10:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.953 15:10:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.953 15:10:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.953 15:10:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.954 15:10:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.954 15:10:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.954 15:10:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.954 15:10:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.954 15:10:15 -- paths/export.sh@5 -- # export PATH 00:24:56.954 15:10:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.954 15:10:15 -- nvmf/common.sh@46 -- # : 0 00:24:56.954 15:10:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:56.954 15:10:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:56.954 15:10:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:56.954 15:10:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.954 15:10:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.954 15:10:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:56.954 15:10:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:56.954 15:10:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:56.954 15:10:15 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:56.954 15:10:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:56.954 15:10:15 -- common/autotest_common.sh@10 -- # set +x 00:25:03.516 15:10:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:03.516 15:10:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:03.516 15:10:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:03.516 15:10:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:03.516 15:10:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:03.516 15:10:21 -- nvmf/common.sh@292 -- # pci_drivers=() 
00:25:03.516 15:10:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:03.516 15:10:21 -- nvmf/common.sh@294 -- # net_devs=() 00:25:03.516 15:10:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:03.516 15:10:21 -- nvmf/common.sh@295 -- # e810=() 00:25:03.516 15:10:21 -- nvmf/common.sh@295 -- # local -ga e810 00:25:03.516 15:10:21 -- nvmf/common.sh@296 -- # x722=() 00:25:03.516 15:10:21 -- nvmf/common.sh@296 -- # local -ga x722 00:25:03.516 15:10:21 -- nvmf/common.sh@297 -- # mlx=() 00:25:03.516 15:10:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:03.516 15:10:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.516 15:10:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:03.516 15:10:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:03.516 15:10:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:03.516 15:10:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.516 15:10:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:03.516 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:03.516 15:10:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.516 15:10:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:03.516 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:03.516 15:10:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:03.516 15:10:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:03.516 15:10:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:03.516 15:10:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.516 15:10:21 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.516 15:10:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.516 15:10:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:03.516 Found net devices under 0000:af:00.0: cvl_0_0 00:25:03.516 15:10:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.516 15:10:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:03.516 15:10:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.516 15:10:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.516 15:10:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.516 15:10:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:03.516 Found net devices under 0000:af:00.1: cvl_0_1 00:25:03.516 15:10:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.516 15:10:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:03.516 15:10:21 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.516 15:10:21 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:03.516 15:10:21 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:03.516 15:10:21 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:25:03.516 15:10:21 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:03.773 15:10:22 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:06.297 15:10:24 -- target/perf_adq.sh@54 -- # sleep 5 00:25:11.560 15:10:29 -- target/perf_adq.sh@67 -- # nvmftestinit 00:25:11.560 15:10:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:11.560 15:10:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.560 15:10:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:11.560 15:10:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:11.560 15:10:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:11.560 15:10:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.560 15:10:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.561 15:10:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.561 15:10:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:11.561 15:10:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:11.561 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:11.561 15:10:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:11.561 15:10:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:11.561 15:10:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:11.561 15:10:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:11.561 15:10:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:11.561 15:10:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:11.561 15:10:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:11.561 15:10:29 -- nvmf/common.sh@294 -- # net_devs=() 00:25:11.561 15:10:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:11.561 15:10:29 -- nvmf/common.sh@295 -- # e810=() 00:25:11.561 15:10:29 -- nvmf/common.sh@295 -- # local -ga e810 00:25:11.561 15:10:29 -- nvmf/common.sh@296 -- # x722=() 00:25:11.561 15:10:29 -- nvmf/common.sh@296 -- # local -ga x722 00:25:11.561 15:10:29 -- nvmf/common.sh@297 -- # mlx=() 00:25:11.561 15:10:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:11.561 15:10:29 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.561 15:10:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:11.561 15:10:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:11.561 15:10:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:11.561 15:10:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.561 15:10:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:11.561 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:11.561 15:10:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.561 15:10:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:11.561 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:11.561 15:10:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:11.561 15:10:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.561 15:10:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.561 15:10:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.561 15:10:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.561 15:10:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:11.561 Found net devices under 0000:af:00.0: cvl_0_0 00:25:11.561 15:10:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.561 15:10:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.561 15:10:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.561 15:10:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:25:11.561 15:10:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.561 15:10:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:11.561 Found net devices under 0000:af:00.1: cvl_0_1 00:25:11.561 15:10:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.561 15:10:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:11.561 15:10:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:11.561 15:10:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:11.561 15:10:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.561 15:10:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.561 15:10:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.561 15:10:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:11.561 15:10:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.561 15:10:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.561 15:10:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:11.561 15:10:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.561 15:10:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.561 15:10:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:11.561 15:10:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:11.561 15:10:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.561 15:10:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.561 15:10:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.561 15:10:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.561 15:10:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:11.561 15:10:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.561 15:10:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.561 15:10:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.561 15:10:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:11.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:25:11.561 00:25:11.561 --- 10.0.0.2 ping statistics --- 00:25:11.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.561 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:25:11.561 15:10:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:25:11.561 00:25:11.561 --- 10.0.0.1 ping statistics --- 00:25:11.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.561 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:11.561 15:10:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.561 15:10:29 -- nvmf/common.sh@410 -- # return 0 00:25:11.561 15:10:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:11.561 15:10:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.561 15:10:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:11.561 15:10:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.561 15:10:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:11.561 15:10:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:11.561 15:10:29 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:11.561 15:10:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:11.561 15:10:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:11.561 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:11.561 15:10:29 -- nvmf/common.sh@469 -- # nvmfpid=3376872 00:25:11.561 15:10:29 -- nvmf/common.sh@470 -- # waitforlisten 3376872 00:25:11.561 15:10:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:11.561 15:10:29 -- common/autotest_common.sh@819 -- # '[' -z 3376872 ']' 00:25:11.561 15:10:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.561 15:10:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:11.561 15:10:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.561 15:10:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:11.561 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:25:11.561 [2024-06-11 15:10:29.919814] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:11.561 [2024-06-11 15:10:29.919867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.561 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.561 [2024-06-11 15:10:30.015979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.561 [2024-06-11 15:10:30.111277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:11.561 [2024-06-11 15:10:30.111414] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.561 [2024-06-11 15:10:30.111426] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.561 [2024-06-11 15:10:30.111436] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
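The nvmftestinit trace above builds the loopback NVMe/TCP topology for this run: the target-side e810 port (cvl_0_0) is moved into its own network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so traffic leaves one physical port and re-enters the other (the two ports are presumably cabled back-to-back on this rig) instead of going over loopback. A minimal sketch of the sequence the trace shows, using the interface names and 10.0.0.0/24 addresses from this job:

    ip netns add cvl_0_0_ns_spdk                  # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator port
    ping -c 1 10.0.0.2                            # sanity-check both directions before starting the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched with the "ip netns exec cvl_0_0_ns_spdk" prefix (NVMF_TARGET_NS_CMD), which is why the 10.0.0.2:4420 listener created later lives inside that namespace while spdk_nvme_perf connects to it from the root namespace over cvl_0_1.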
00:25:11.561 [2024-06-11 15:10:30.111484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.561 [2024-06-11 15:10:30.111584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.561 [2024-06-11 15:10:30.111608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.561 [2024-06-11 15:10:30.111608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:12.126 15:10:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:12.126 15:10:30 -- common/autotest_common.sh@852 -- # return 0 00:25:12.126 15:10:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:12.126 15:10:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:12.126 15:10:30 -- common/autotest_common.sh@10 -- # set +x 00:25:12.126 15:10:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.126 15:10:30 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:25:12.127 15:10:30 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:12.127 15:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.127 15:10:30 -- common/autotest_common.sh@10 -- # set +x 00:25:12.127 15:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.127 15:10:30 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:12.127 15:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.127 15:10:30 -- common/autotest_common.sh@10 -- # set +x 00:25:12.385 15:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.385 15:10:31 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:12.385 15:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.385 15:10:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.385 [2024-06-11 15:10:31.008818] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.385 15:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.385 15:10:31 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:12.385 15:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.385 15:10:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.385 Malloc1 00:25:12.385 15:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.385 15:10:31 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.385 15:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.385 15:10:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.385 15:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.385 15:10:31 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:12.385 15:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.385 15:10:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.385 15:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.385 15:10:31 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.385 15:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.385 15:10:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.385 [2024-06-11 15:10:31.064717] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.385 15:10:31 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.385 15:10:31 -- target/perf_adq.sh@73 -- # perfpid=3377157 00:25:12.385 15:10:31 -- target/perf_adq.sh@74 -- # sleep 2 00:25:12.385 15:10:31 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:12.385 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.285 15:10:33 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:25:14.285 15:10:33 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:14.285 15:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.285 15:10:33 -- target/perf_adq.sh@76 -- # wc -l 00:25:14.285 15:10:33 -- common/autotest_common.sh@10 -- # set +x 00:25:14.285 15:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.285 15:10:33 -- target/perf_adq.sh@76 -- # count=4 00:25:14.285 15:10:33 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:25:14.285 15:10:33 -- target/perf_adq.sh@81 -- # wait 3377157 00:25:22.453 Initializing NVMe Controllers 00:25:22.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:22.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:22.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:22.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:22.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:22.453 Initialization complete. Launching workers. 00:25:22.453 ======================================================== 00:25:22.453 Latency(us) 00:25:22.453 Device Information : IOPS MiB/s Average min max 00:25:22.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8455.20 33.03 7569.01 1247.95 11901.80 00:25:22.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10892.00 42.55 5875.87 1655.66 11382.43 00:25:22.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8521.00 33.29 7535.62 1367.01 46441.41 00:25:22.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9030.10 35.27 7087.77 1238.62 12052.83 00:25:22.453 ======================================================== 00:25:22.453 Total : 36898.30 144.13 6943.73 1238.62 46441.41 00:25:22.453 00:25:22.453 15:10:41 -- target/perf_adq.sh@82 -- # nvmftestfini 00:25:22.453 15:10:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:22.453 15:10:41 -- nvmf/common.sh@116 -- # sync 00:25:22.453 15:10:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:22.453 15:10:41 -- nvmf/common.sh@119 -- # set +e 00:25:22.453 15:10:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:22.453 15:10:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:22.453 rmmod nvme_tcp 00:25:22.453 rmmod nvme_fabrics 00:25:22.711 rmmod nvme_keyring 00:25:22.711 15:10:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:22.711 15:10:41 -- nvmf/common.sh@123 -- # set -e 00:25:22.711 15:10:41 -- nvmf/common.sh@124 -- # return 0 00:25:22.711 15:10:41 -- nvmf/common.sh@477 -- # '[' -n 3376872 ']' 00:25:22.711 15:10:41 -- nvmf/common.sh@478 -- # killprocess 3376872 00:25:22.711 15:10:41 -- common/autotest_common.sh@926 -- # '[' -z 3376872 ']' 00:25:22.711 15:10:41 -- common/autotest_common.sh@930 -- 
# kill -0 3376872 00:25:22.711 15:10:41 -- common/autotest_common.sh@931 -- # uname 00:25:22.711 15:10:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:22.711 15:10:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3376872 00:25:22.711 15:10:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:22.711 15:10:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:22.711 15:10:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3376872' 00:25:22.711 killing process with pid 3376872 00:25:22.711 15:10:41 -- common/autotest_common.sh@945 -- # kill 3376872 00:25:22.711 15:10:41 -- common/autotest_common.sh@950 -- # wait 3376872 00:25:22.970 15:10:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:22.970 15:10:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:22.970 15:10:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:22.970 15:10:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.970 15:10:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:22.970 15:10:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.970 15:10:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.970 15:10:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.868 15:10:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:24.868 15:10:43 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:25:24.868 15:10:43 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:26.247 15:10:45 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:28.782 15:10:47 -- target/perf_adq.sh@54 -- # sleep 5 00:25:34.057 15:10:52 -- target/perf_adq.sh@87 -- # nvmftestinit 00:25:34.057 15:10:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:34.057 15:10:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.057 15:10:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:34.057 15:10:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:34.057 15:10:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:34.057 15:10:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.057 15:10:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.057 15:10:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.057 15:10:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:34.057 15:10:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:34.057 15:10:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:34.057 15:10:52 -- common/autotest_common.sh@10 -- # set +x 00:25:34.057 15:10:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:34.057 15:10:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:34.057 15:10:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:34.057 15:10:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:34.057 15:10:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:34.057 15:10:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:34.057 15:10:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:34.057 15:10:52 -- nvmf/common.sh@294 -- # net_devs=() 00:25:34.057 15:10:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:34.057 15:10:52 -- nvmf/common.sh@295 -- # e810=() 00:25:34.057 15:10:52 -- nvmf/common.sh@295 -- # local -ga e810 00:25:34.057 15:10:52 -- nvmf/common.sh@296 -- # x722=() 00:25:34.057 15:10:52 -- nvmf/common.sh@296 -- # local -ga x722 00:25:34.057 15:10:52 -- nvmf/common.sh@297 -- # mlx=() 00:25:34.057 15:10:52 
-- nvmf/common.sh@297 -- # local -ga mlx 00:25:34.057 15:10:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.057 15:10:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.058 15:10:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:34.058 15:10:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:34.058 15:10:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:34.058 15:10:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:34.058 15:10:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:34.058 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:34.058 15:10:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:34.058 15:10:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:34.058 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:34.058 15:10:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:34.058 15:10:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:34.058 15:10:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.058 15:10:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:34.058 15:10:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.058 15:10:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:34.058 Found net devices under 0000:af:00.0: cvl_0_0 00:25:34.058 15:10:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.058 15:10:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:34.058 15:10:52 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.058 15:10:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:34.058 15:10:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.058 15:10:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:34.058 Found net devices under 0000:af:00.1: cvl_0_1 00:25:34.058 15:10:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.058 15:10:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:34.058 15:10:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:34.058 15:10:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:34.058 15:10:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.058 15:10:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.058 15:10:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.058 15:10:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:34.058 15:10:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.058 15:10:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.058 15:10:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:34.058 15:10:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.058 15:10:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.058 15:10:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:34.058 15:10:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:34.058 15:10:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.058 15:10:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.058 15:10:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.058 15:10:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.058 15:10:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:34.058 15:10:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.058 15:10:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.058 15:10:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.058 15:10:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:34.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:25:34.058 00:25:34.058 --- 10.0.0.2 ping statistics --- 00:25:34.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.058 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:25:34.058 15:10:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:25:34.058 00:25:34.058 --- 10.0.0.1 ping statistics --- 00:25:34.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.058 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:34.058 15:10:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.058 15:10:52 -- nvmf/common.sh@410 -- # return 0 00:25:34.058 15:10:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:34.058 15:10:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.058 15:10:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:34.058 15:10:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.058 15:10:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:34.058 15:10:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:34.058 15:10:52 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:25:34.058 15:10:52 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:34.058 15:10:52 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:34.058 15:10:52 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:34.058 net.core.busy_poll = 1 00:25:34.058 15:10:52 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:34.058 net.core.busy_read = 1 00:25:34.058 15:10:52 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:34.058 15:10:52 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:34.058 15:10:52 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:34.058 15:10:52 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:34.058 15:10:52 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:34.058 15:10:52 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:34.058 15:10:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:34.058 15:10:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:34.059 15:10:52 -- common/autotest_common.sh@10 -- # set +x 00:25:34.059 15:10:52 -- nvmf/common.sh@469 -- # nvmfpid=3381260 00:25:34.059 15:10:52 -- nvmf/common.sh@470 -- # waitforlisten 3381260 00:25:34.059 15:10:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:34.059 15:10:52 -- common/autotest_common.sh@819 -- # '[' -z 3381260 ']' 00:25:34.059 15:10:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.059 15:10:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:34.059 15:10:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
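This is where the second perf pass diverges from the first: adq_configure_driver enables hardware traffic-class (ADQ) steering on the target port before the target is restarted. A condensed sketch of the commands from the trace above (the ethtool/tc commands run inside the cvl_0_0_ns_spdk namespace via ip netns exec, the sysctls are global; queue layout and 10.0.0.2:4420 are this job's values):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1        # busy-poll sockets instead of sleeping on them
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer inbound NVMe/TCP (dst port 4420) into TC1 entirely in hardware
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The matching software-side knobs appear a little further down: sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1, which (roughly) let the target place each accepted connection on the poll group corresponding to the hardware queue it arrived on.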
00:25:34.059 15:10:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:34.059 15:10:52 -- common/autotest_common.sh@10 -- # set +x 00:25:34.059 [2024-06-11 15:10:52.722629] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:34.059 [2024-06-11 15:10:52.722686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.059 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.059 [2024-06-11 15:10:52.815999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.316 [2024-06-11 15:10:52.903074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:34.316 [2024-06-11 15:10:52.903219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.316 [2024-06-11 15:10:52.903232] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.316 [2024-06-11 15:10:52.903242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.316 [2024-06-11 15:10:52.903293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.316 [2024-06-11 15:10:52.903407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.316 [2024-06-11 15:10:52.903526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.316 [2024-06-11 15:10:52.903527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.882 15:10:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:34.882 15:10:53 -- common/autotest_common.sh@852 -- # return 0 00:25:34.882 15:10:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:34.882 15:10:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:34.882 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:34.882 15:10:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.882 15:10:53 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:25:34.882 15:10:53 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:34.882 15:10:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.882 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:34.882 15:10:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.882 15:10:53 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:34.882 15:10:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.882 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:35.140 15:10:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.140 15:10:53 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:35.140 15:10:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.140 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:35.140 [2024-06-11 15:10:53.805211] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.140 15:10:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.140 15:10:53 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:35.140 15:10:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.140 15:10:53 -- 
common/autotest_common.sh@10 -- # set +x 00:25:35.140 Malloc1 00:25:35.140 15:10:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.140 15:10:53 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:35.140 15:10:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.140 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:35.140 15:10:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.140 15:10:53 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:35.140 15:10:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.140 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:35.140 15:10:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.140 15:10:53 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.140 15:10:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.140 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:35.140 [2024-06-11 15:10:53.857167] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.140 15:10:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.140 15:10:53 -- target/perf_adq.sh@94 -- # perfpid=3381544 00:25:35.140 15:10:53 -- target/perf_adq.sh@95 -- # sleep 2 00:25:35.140 15:10:53 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:35.140 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.037 15:10:55 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:37.037 15:10:55 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:37.037 15:10:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.037 15:10:55 -- target/perf_adq.sh@97 -- # wc -l 00:25:37.037 15:10:55 -- common/autotest_common.sh@10 -- # set +x 00:25:37.294 15:10:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.294 15:10:55 -- target/perf_adq.sh@97 -- # count=2 00:25:37.294 15:10:55 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:25:37.294 15:10:55 -- target/perf_adq.sh@103 -- # wait 3381544 00:25:45.402 Initializing NVMe Controllers 00:25:45.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:45.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:45.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:45.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:45.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:45.402 Initialization complete. Launching workers. 
00:25:45.402 ======================================================== 00:25:45.402 Latency(us) 00:25:45.402 Device Information : IOPS MiB/s Average min max 00:25:45.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5627.00 21.98 11376.58 1775.26 56391.43 00:25:45.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8428.40 32.92 7595.44 1682.39 51441.19 00:25:45.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9005.50 35.18 7133.14 1554.90 51467.45 00:25:45.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5770.70 22.54 11094.23 2435.70 58279.30 00:25:45.402 ======================================================== 00:25:45.402 Total : 28831.59 112.62 8889.29 1554.90 58279.30 00:25:45.402 00:25:45.402 15:11:04 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:45.402 15:11:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:45.402 15:11:04 -- nvmf/common.sh@116 -- # sync 00:25:45.402 15:11:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:45.402 15:11:04 -- nvmf/common.sh@119 -- # set +e 00:25:45.402 15:11:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:45.402 15:11:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:45.402 rmmod nvme_tcp 00:25:45.402 rmmod nvme_fabrics 00:25:45.402 rmmod nvme_keyring 00:25:45.402 15:11:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:45.402 15:11:04 -- nvmf/common.sh@123 -- # set -e 00:25:45.402 15:11:04 -- nvmf/common.sh@124 -- # return 0 00:25:45.402 15:11:04 -- nvmf/common.sh@477 -- # '[' -n 3381260 ']' 00:25:45.402 15:11:04 -- nvmf/common.sh@478 -- # killprocess 3381260 00:25:45.402 15:11:04 -- common/autotest_common.sh@926 -- # '[' -z 3381260 ']' 00:25:45.402 15:11:04 -- common/autotest_common.sh@930 -- # kill -0 3381260 00:25:45.402 15:11:04 -- common/autotest_common.sh@931 -- # uname 00:25:45.403 15:11:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:45.403 15:11:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3381260 00:25:45.403 15:11:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:45.403 15:11:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:45.403 15:11:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3381260' 00:25:45.403 killing process with pid 3381260 00:25:45.403 15:11:04 -- common/autotest_common.sh@945 -- # kill 3381260 00:25:45.403 15:11:04 -- common/autotest_common.sh@950 -- # wait 3381260 00:25:45.662 15:11:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:45.662 15:11:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:45.662 15:11:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:45.662 15:11:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.662 15:11:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:45.662 15:11:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.662 15:11:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.662 15:11:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.951 15:11:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:48.951 15:11:07 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:48.951 00:25:48.951 real 0m52.315s 00:25:48.951 user 2m49.971s 00:25:48.951 sys 0m10.602s 00:25:48.951 15:11:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.951 15:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:48.951 
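Both perf passes above are judged by the nvmf_get_stats check traced just before each wait: it asks the running target how I/O qpairs are distributed over its poll groups. Reconstructed as a standalone pipeline (rpc.py path shown relative to the SPDK checkout; the jq selector value is what changes between the two runs):

    # count poll groups currently owning exactly one I/O qpair (use == 0 for the ADQ pass)
    scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l

In the first pass the script appears to require all four poll groups to report one qpair each (the [[ 4 -ne 4 ]] guard); in the ADQ pass it instead checks that at least two poll groups stayed idle ([[ 2 -lt 2 ]]), i.e. that the flower filter concentrated the connections onto the queues backing TC1.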
************************************ 00:25:48.951 END TEST nvmf_perf_adq 00:25:48.951 ************************************ 00:25:48.951 15:11:07 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:48.951 15:11:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:48.951 15:11:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:48.951 15:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:48.951 ************************************ 00:25:48.951 START TEST nvmf_shutdown 00:25:48.951 ************************************ 00:25:48.951 15:11:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:48.951 * Looking for test storage... 00:25:48.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:48.951 15:11:07 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.951 15:11:07 -- nvmf/common.sh@7 -- # uname -s 00:25:48.951 15:11:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.951 15:11:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.951 15:11:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.951 15:11:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.951 15:11:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.951 15:11:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.951 15:11:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.951 15:11:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.951 15:11:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.951 15:11:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.951 15:11:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:48.951 15:11:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:48.951 15:11:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.951 15:11:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.951 15:11:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.951 15:11:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.951 15:11:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.951 15:11:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.951 15:11:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.951 15:11:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.951 15:11:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.952 15:11:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.952 15:11:07 -- paths/export.sh@5 -- # export PATH 00:25:48.952 15:11:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.952 15:11:07 -- nvmf/common.sh@46 -- # : 0 00:25:48.952 15:11:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:48.952 15:11:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:48.952 15:11:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:48.952 15:11:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.952 15:11:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.952 15:11:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:48.952 15:11:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:48.952 15:11:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:48.952 15:11:07 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:48.952 15:11:07 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:48.952 15:11:07 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:48.952 15:11:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:48.952 15:11:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:48.952 15:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:48.952 ************************************ 00:25:48.952 START TEST nvmf_shutdown_tc1 00:25:48.952 ************************************ 00:25:48.952 15:11:07 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:25:48.952 15:11:07 -- target/shutdown.sh@74 -- # starttarget 00:25:48.952 15:11:07 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:48.952 15:11:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:48.952 15:11:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.952 15:11:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:48.952 15:11:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:48.952 15:11:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:48.952 
15:11:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.952 15:11:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.952 15:11:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.952 15:11:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:48.952 15:11:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:48.952 15:11:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:48.952 15:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:55.512 15:11:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:55.512 15:11:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:55.512 15:11:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:55.512 15:11:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:55.512 15:11:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:55.512 15:11:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:55.512 15:11:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:55.512 15:11:13 -- nvmf/common.sh@294 -- # net_devs=() 00:25:55.512 15:11:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:55.512 15:11:13 -- nvmf/common.sh@295 -- # e810=() 00:25:55.512 15:11:13 -- nvmf/common.sh@295 -- # local -ga e810 00:25:55.512 15:11:13 -- nvmf/common.sh@296 -- # x722=() 00:25:55.512 15:11:13 -- nvmf/common.sh@296 -- # local -ga x722 00:25:55.512 15:11:13 -- nvmf/common.sh@297 -- # mlx=() 00:25:55.512 15:11:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:55.512 15:11:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.512 15:11:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:55.512 15:11:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:55.512 15:11:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:55.512 15:11:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:55.512 15:11:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:55.512 15:11:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:55.512 15:11:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:55.512 15:11:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:55.512 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:55.512 15:11:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:55.512 15:11:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:25:55.513 15:11:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:55.513 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:55.513 15:11:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:55.513 15:11:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:55.513 15:11:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.513 15:11:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:55.513 15:11:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.513 15:11:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:55.513 Found net devices under 0000:af:00.0: cvl_0_0 00:25:55.513 15:11:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.513 15:11:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:55.513 15:11:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.513 15:11:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:55.513 15:11:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.513 15:11:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:55.513 Found net devices under 0000:af:00.1: cvl_0_1 00:25:55.513 15:11:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.513 15:11:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:55.513 15:11:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:55.513 15:11:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:55.513 15:11:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.513 15:11:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.513 15:11:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.513 15:11:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:55.513 15:11:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.513 15:11:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.513 15:11:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:55.513 15:11:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.513 15:11:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.513 15:11:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:55.513 15:11:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:55.513 15:11:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.513 15:11:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.513 15:11:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.513 15:11:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.513 15:11:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:55.513 15:11:13 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.513 15:11:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.513 15:11:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.513 15:11:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:55.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:25:55.513 00:25:55.513 --- 10.0.0.2 ping statistics --- 00:25:55.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.513 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:55.513 15:11:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:55.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:25:55.513 00:25:55.513 --- 10.0.0.1 ping statistics --- 00:25:55.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.513 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:25:55.513 15:11:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.513 15:11:13 -- nvmf/common.sh@410 -- # return 0 00:25:55.513 15:11:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:55.513 15:11:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.513 15:11:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:55.513 15:11:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.513 15:11:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:55.513 15:11:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:55.513 15:11:13 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:55.513 15:11:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:55.513 15:11:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:55.513 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:55.513 15:11:13 -- nvmf/common.sh@469 -- # nvmfpid=3387540 00:25:55.513 15:11:13 -- nvmf/common.sh@470 -- # waitforlisten 3387540 00:25:55.513 15:11:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:55.513 15:11:13 -- common/autotest_common.sh@819 -- # '[' -z 3387540 ']' 00:25:55.513 15:11:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.513 15:11:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:55.513 15:11:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.513 15:11:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:55.513 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:55.513 [2024-06-11 15:11:14.026805] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
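The nvmf_tcp_init block above turns the two E810 ports (cvl_0_0 on 0000:af:00.0, cvl_0_1 on 0000:af:00.1) into a self-contained initiator/target pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in iptables, and the two pings confirm reachability in both directions before nvmfappstart launches nvmf_tgt inside the namespace. Condensed into a stand-alone sketch (interface names, addresses and the nvmf_tgt path/flags are taken from this run; assumes root):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &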
00:25:55.513 [2024-06-11 15:11:14.026863] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.513 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.513 [2024-06-11 15:11:14.115618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.513 [2024-06-11 15:11:14.201919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:55.513 [2024-06-11 15:11:14.202075] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.513 [2024-06-11 15:11:14.202088] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.513 [2024-06-11 15:11:14.202099] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.513 [2024-06-11 15:11:14.202146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.513 [2024-06-11 15:11:14.202241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.513 [2024-06-11 15:11:14.202355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:55.513 [2024-06-11 15:11:14.202356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.079 15:11:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:56.079 15:11:14 -- common/autotest_common.sh@852 -- # return 0 00:25:56.079 15:11:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:56.079 15:11:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:56.079 15:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.337 15:11:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.337 15:11:14 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:56.337 15:11:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.337 15:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.337 [2024-06-11 15:11:14.926457] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.337 15:11:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.337 15:11:14 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:56.337 15:11:14 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:56.337 15:11:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:56.337 15:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.337 15:11:14 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:56.337 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.337 15:11:14 -- target/shutdown.sh@28 -- # cat 00:25:56.337 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.337 15:11:14 -- target/shutdown.sh@28 -- # cat 00:25:56.337 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.337 15:11:14 -- target/shutdown.sh@28 -- # cat 00:25:56.338 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.338 15:11:14 -- target/shutdown.sh@28 -- # cat 00:25:56.338 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.338 15:11:14 -- target/shutdown.sh@28 -- # cat 00:25:56.338 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.338 15:11:14 -- 
target/shutdown.sh@28 -- # cat 00:25:56.338 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.338 15:11:14 -- target/shutdown.sh@28 -- # cat 00:25:56.338 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.338 15:11:14 -- target/shutdown.sh@28 -- # cat 00:25:56.338 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.338 15:11:14 -- target/shutdown.sh@28 -- # cat 00:25:56.338 15:11:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:56.338 15:11:14 -- target/shutdown.sh@28 -- # cat 00:25:56.338 15:11:14 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:56.338 15:11:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.338 15:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.338 Malloc1 00:25:56.338 [2024-06-11 15:11:15.026365] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.338 Malloc2 00:25:56.338 Malloc3 00:25:56.338 Malloc4 00:25:56.338 Malloc5 00:25:56.596 Malloc6 00:25:56.596 Malloc7 00:25:56.596 Malloc8 00:25:56.596 Malloc9 00:25:56.596 Malloc10 00:25:56.596 15:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.596 15:11:15 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:56.596 15:11:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:56.596 15:11:15 -- common/autotest_common.sh@10 -- # set +x 00:25:56.855 15:11:15 -- target/shutdown.sh@78 -- # perfpid=3387852 00:25:56.855 15:11:15 -- target/shutdown.sh@79 -- # waitforlisten 3387852 /var/tmp/bdevperf.sock 00:25:56.855 15:11:15 -- common/autotest_common.sh@819 -- # '[' -z 3387852 ']' 00:25:56.855 15:11:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:56.855 15:11:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:56.855 15:11:15 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:56.855 15:11:15 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:56.855 15:11:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:56.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
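shutdown.sh@26-35 assembles rpcs.txt with one block per subsystem (the ten 'cat' iterations above) and replays it against the target once it is listening; the file itself is never echoed into the log, but the Malloc1..Malloc10 bdevs and the 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice above imply a batch along these lines. This is a hedged reconstruction, not the literal rpcs.txt: the RPC names, NQNs and listener address come from the log, while the malloc geometry and serial numbers are illustrative.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # defaults to /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192                           # as issued at shutdown.sh@20
  for i in {1..10}; do
      $RPC bdev_malloc_create -b Malloc$i 64 512                         # 64 MiB bdev, 512 B blocks (illustrative)
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done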
00:25:56.855 15:11:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:56.855 15:11:15 -- nvmf/common.sh@520 -- # config=() 00:25:56.855 15:11:15 -- common/autotest_common.sh@10 -- # set +x 00:25:56.855 15:11:15 -- nvmf/common.sh@520 -- # local subsystem config 00:25:56.855 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.855 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.855 { 00:25:56.855 "params": { 00:25:56.855 "name": "Nvme$subsystem", 00:25:56.855 "trtype": "$TEST_TRANSPORT", 00:25:56.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.855 "adrfam": "ipv4", 00:25:56.855 "trsvcid": "$NVMF_PORT", 00:25:56.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.855 "hdgst": ${hdgst:-false}, 00:25:56.855 "ddgst": ${ddgst:-false} 00:25:56.855 }, 00:25:56.855 "method": "bdev_nvme_attach_controller" 00:25:56.855 } 00:25:56.855 EOF 00:25:56.855 )") 00:25:56.855 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.855 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.855 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.855 { 00:25:56.855 "params": { 00:25:56.855 "name": "Nvme$subsystem", 00:25:56.855 "trtype": "$TEST_TRANSPORT", 00:25:56.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.855 "adrfam": "ipv4", 00:25:56.855 "trsvcid": "$NVMF_PORT", 00:25:56.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.855 "hdgst": ${hdgst:-false}, 00:25:56.855 "ddgst": ${ddgst:-false} 00:25:56.855 }, 00:25:56.855 "method": "bdev_nvme_attach_controller" 00:25:56.855 } 00:25:56.855 EOF 00:25:56.855 )") 00:25:56.855 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.855 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.855 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.855 { 00:25:56.855 "params": { 00:25:56.855 "name": "Nvme$subsystem", 00:25:56.855 "trtype": "$TEST_TRANSPORT", 00:25:56.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.855 "adrfam": "ipv4", 00:25:56.855 "trsvcid": "$NVMF_PORT", 00:25:56.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.855 "hdgst": ${hdgst:-false}, 00:25:56.855 "ddgst": ${ddgst:-false} 00:25:56.855 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 } 00:25:56.856 EOF 00:25:56.856 )") 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.856 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.856 { 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme$subsystem", 00:25:56.856 "trtype": "$TEST_TRANSPORT", 00:25:56.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "$NVMF_PORT", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.856 "hdgst": ${hdgst:-false}, 00:25:56.856 "ddgst": ${ddgst:-false} 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 } 00:25:56.856 EOF 00:25:56.856 )") 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.856 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.856 { 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme$subsystem", 00:25:56.856 "trtype": 
"$TEST_TRANSPORT", 00:25:56.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "$NVMF_PORT", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.856 "hdgst": ${hdgst:-false}, 00:25:56.856 "ddgst": ${ddgst:-false} 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 } 00:25:56.856 EOF 00:25:56.856 )") 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.856 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.856 { 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme$subsystem", 00:25:56.856 "trtype": "$TEST_TRANSPORT", 00:25:56.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "$NVMF_PORT", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.856 "hdgst": ${hdgst:-false}, 00:25:56.856 "ddgst": ${ddgst:-false} 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 } 00:25:56.856 EOF 00:25:56.856 )") 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.856 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.856 { 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme$subsystem", 00:25:56.856 "trtype": "$TEST_TRANSPORT", 00:25:56.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "$NVMF_PORT", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.856 "hdgst": ${hdgst:-false}, 00:25:56.856 "ddgst": ${ddgst:-false} 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 } 00:25:56.856 EOF 00:25:56.856 )") 00:25:56.856 [2024-06-11 15:11:15.502985] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:56.856 [2024-06-11 15:11:15.503050] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.856 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.856 { 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme$subsystem", 00:25:56.856 "trtype": "$TEST_TRANSPORT", 00:25:56.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "$NVMF_PORT", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.856 "hdgst": ${hdgst:-false}, 00:25:56.856 "ddgst": ${ddgst:-false} 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 } 00:25:56.856 EOF 00:25:56.856 )") 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.856 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.856 { 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme$subsystem", 00:25:56.856 "trtype": "$TEST_TRANSPORT", 00:25:56.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "$NVMF_PORT", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.856 "hdgst": ${hdgst:-false}, 00:25:56.856 "ddgst": ${ddgst:-false} 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 } 00:25:56.856 EOF 00:25:56.856 )") 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.856 15:11:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.856 { 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme$subsystem", 00:25:56.856 "trtype": "$TEST_TRANSPORT", 00:25:56.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "$NVMF_PORT", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.856 "hdgst": ${hdgst:-false}, 00:25:56.856 "ddgst": ${ddgst:-false} 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 } 00:25:56.856 EOF 00:25:56.856 )") 00:25:56.856 15:11:15 -- nvmf/common.sh@542 -- # cat 00:25:56.856 15:11:15 -- nvmf/common.sh@544 -- # jq . 
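gen_nvmf_target_json, traced at nvmf/common.sh@520-546 above, is how the job feeds a bdev configuration to bdev_svc (and later to bdevperf) without writing a file: every subsystem number passed in appends one bdev_nvme_attach_controller stanza to the config array through the heredocs echoed at @542, the stanzas are joined with IFS=',', pretty-printed and validated by 'jq .', and the caller hands the stream over as '--json /dev/fd/63' via process substitution. A simplified stand-in for the same pattern is sketched below; the top-level subsystems/bdev envelope is assumed from SPDK's JSON-config layout (only the per-controller stanzas are visible in this trace), and only two controllers are generated for brevity.
  gen_json() {    # simplified stand-in for gen_nvmf_target_json
      local stanzas=() i
      for i in "$@"; do
          stanzas+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$i" "$i" "$i")")
      done
      local IFS=,
      # envelope assumed from SPDK's JSON-config layout, not shown in the trace above
      echo "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${stanzas[*]}]}]}" | jq .
  }
  # same no-temp-file pattern as shutdown.sh@77: the JSON is consumed via /dev/fd
  ./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_json 1 2)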
00:25:56.856 15:11:15 -- nvmf/common.sh@545 -- # IFS=, 00:25:56.856 15:11:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme1", 00:25:56.856 "trtype": "tcp", 00:25:56.856 "traddr": "10.0.0.2", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "4420", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:56.856 "hdgst": false, 00:25:56.856 "ddgst": false 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 },{ 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme2", 00:25:56.856 "trtype": "tcp", 00:25:56.856 "traddr": "10.0.0.2", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "4420", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:56.856 "hdgst": false, 00:25:56.856 "ddgst": false 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 },{ 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme3", 00:25:56.856 "trtype": "tcp", 00:25:56.856 "traddr": "10.0.0.2", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "4420", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:56.856 "hdgst": false, 00:25:56.856 "ddgst": false 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 },{ 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme4", 00:25:56.856 "trtype": "tcp", 00:25:56.856 "traddr": "10.0.0.2", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "4420", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:56.856 "hdgst": false, 00:25:56.856 "ddgst": false 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 },{ 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme5", 00:25:56.856 "trtype": "tcp", 00:25:56.856 "traddr": "10.0.0.2", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "4420", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:56.856 "hdgst": false, 00:25:56.856 "ddgst": false 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 },{ 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme6", 00:25:56.856 "trtype": "tcp", 00:25:56.856 "traddr": "10.0.0.2", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "4420", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:56.856 "hdgst": false, 00:25:56.856 "ddgst": false 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 },{ 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme7", 00:25:56.856 "trtype": "tcp", 00:25:56.856 "traddr": "10.0.0.2", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "4420", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:56.856 "hdgst": false, 00:25:56.856 "ddgst": false 00:25:56.856 }, 00:25:56.856 "method": "bdev_nvme_attach_controller" 00:25:56.856 },{ 00:25:56.856 "params": { 00:25:56.856 "name": "Nvme8", 00:25:56.856 "trtype": "tcp", 00:25:56.856 "traddr": "10.0.0.2", 00:25:56.856 "adrfam": "ipv4", 00:25:56.856 "trsvcid": "4420", 00:25:56.856 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:56.856 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:56.857 "hdgst": false, 00:25:56.857 "ddgst": false 00:25:56.857 }, 00:25:56.857 "method": 
"bdev_nvme_attach_controller" 00:25:56.857 },{ 00:25:56.857 "params": { 00:25:56.857 "name": "Nvme9", 00:25:56.857 "trtype": "tcp", 00:25:56.857 "traddr": "10.0.0.2", 00:25:56.857 "adrfam": "ipv4", 00:25:56.857 "trsvcid": "4420", 00:25:56.857 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:56.857 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:56.857 "hdgst": false, 00:25:56.857 "ddgst": false 00:25:56.857 }, 00:25:56.857 "method": "bdev_nvme_attach_controller" 00:25:56.857 },{ 00:25:56.857 "params": { 00:25:56.857 "name": "Nvme10", 00:25:56.857 "trtype": "tcp", 00:25:56.857 "traddr": "10.0.0.2", 00:25:56.857 "adrfam": "ipv4", 00:25:56.857 "trsvcid": "4420", 00:25:56.857 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:56.857 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:56.857 "hdgst": false, 00:25:56.857 "ddgst": false 00:25:56.857 }, 00:25:56.857 "method": "bdev_nvme_attach_controller" 00:25:56.857 }' 00:25:56.857 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.857 [2024-06-11 15:11:15.594008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.857 [2024-06-11 15:11:15.676380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.756 15:11:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:58.756 15:11:17 -- common/autotest_common.sh@852 -- # return 0 00:25:58.756 15:11:17 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:58.756 15:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.756 15:11:17 -- common/autotest_common.sh@10 -- # set +x 00:25:58.756 15:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.756 15:11:17 -- target/shutdown.sh@83 -- # kill -9 3387852 00:25:58.756 15:11:17 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:58.756 15:11:17 -- target/shutdown.sh@87 -- # sleep 1 00:25:59.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3387852 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:59.692 15:11:18 -- target/shutdown.sh@88 -- # kill -0 3387540 00:25:59.692 15:11:18 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:59.692 15:11:18 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:59.692 15:11:18 -- nvmf/common.sh@520 -- # config=() 00:25:59.692 15:11:18 -- nvmf/common.sh@520 -- # local subsystem config 00:25:59.692 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.692 { 00:25:59.692 "params": { 00:25:59.692 "name": "Nvme$subsystem", 00:25:59.692 "trtype": "$TEST_TRANSPORT", 00:25:59.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.692 "adrfam": "ipv4", 00:25:59.692 "trsvcid": "$NVMF_PORT", 00:25:59.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.692 "hdgst": ${hdgst:-false}, 00:25:59.692 "ddgst": ${ddgst:-false} 00:25:59.692 }, 00:25:59.692 "method": "bdev_nvme_attach_controller" 00:25:59.692 } 00:25:59.692 EOF 00:25:59.692 )") 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.692 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.692 { 00:25:59.692 "params": { 00:25:59.692 
"name": "Nvme$subsystem", 00:25:59.692 "trtype": "$TEST_TRANSPORT", 00:25:59.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.692 "adrfam": "ipv4", 00:25:59.692 "trsvcid": "$NVMF_PORT", 00:25:59.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.692 "hdgst": ${hdgst:-false}, 00:25:59.692 "ddgst": ${ddgst:-false} 00:25:59.692 }, 00:25:59.692 "method": "bdev_nvme_attach_controller" 00:25:59.692 } 00:25:59.692 EOF 00:25:59.692 )") 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.692 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.692 { 00:25:59.692 "params": { 00:25:59.692 "name": "Nvme$subsystem", 00:25:59.692 "trtype": "$TEST_TRANSPORT", 00:25:59.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.692 "adrfam": "ipv4", 00:25:59.692 "trsvcid": "$NVMF_PORT", 00:25:59.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.692 "hdgst": ${hdgst:-false}, 00:25:59.692 "ddgst": ${ddgst:-false} 00:25:59.692 }, 00:25:59.692 "method": "bdev_nvme_attach_controller" 00:25:59.692 } 00:25:59.692 EOF 00:25:59.692 )") 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.692 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.692 { 00:25:59.692 "params": { 00:25:59.692 "name": "Nvme$subsystem", 00:25:59.692 "trtype": "$TEST_TRANSPORT", 00:25:59.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.692 "adrfam": "ipv4", 00:25:59.692 "trsvcid": "$NVMF_PORT", 00:25:59.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.692 "hdgst": ${hdgst:-false}, 00:25:59.692 "ddgst": ${ddgst:-false} 00:25:59.692 }, 00:25:59.692 "method": "bdev_nvme_attach_controller" 00:25:59.692 } 00:25:59.692 EOF 00:25:59.692 )") 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.692 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.692 { 00:25:59.692 "params": { 00:25:59.692 "name": "Nvme$subsystem", 00:25:59.692 "trtype": "$TEST_TRANSPORT", 00:25:59.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.692 "adrfam": "ipv4", 00:25:59.692 "trsvcid": "$NVMF_PORT", 00:25:59.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.692 "hdgst": ${hdgst:-false}, 00:25:59.692 "ddgst": ${ddgst:-false} 00:25:59.692 }, 00:25:59.692 "method": "bdev_nvme_attach_controller" 00:25:59.692 } 00:25:59.692 EOF 00:25:59.692 )") 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.692 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.692 { 00:25:59.692 "params": { 00:25:59.692 "name": "Nvme$subsystem", 00:25:59.692 "trtype": "$TEST_TRANSPORT", 00:25:59.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.692 "adrfam": "ipv4", 00:25:59.692 "trsvcid": "$NVMF_PORT", 00:25:59.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.692 "hdgst": ${hdgst:-false}, 00:25:59.692 "ddgst": ${ddgst:-false} 00:25:59.692 }, 00:25:59.692 "method": "bdev_nvme_attach_controller" 00:25:59.692 } 00:25:59.692 EOF 00:25:59.692 )") 00:25:59.692 
15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.692 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.692 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.693 { 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme$subsystem", 00:25:59.693 "trtype": "$TEST_TRANSPORT", 00:25:59.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "$NVMF_PORT", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.693 "hdgst": ${hdgst:-false}, 00:25:59.693 "ddgst": ${ddgst:-false} 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 } 00:25:59.693 EOF 00:25:59.693 )") 00:25:59.693 15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.693 [2024-06-11 15:11:18.238698] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:59.693 [2024-06-11 15:11:18.238760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388415 ] 00:25:59.693 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.693 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.693 { 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme$subsystem", 00:25:59.693 "trtype": "$TEST_TRANSPORT", 00:25:59.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "$NVMF_PORT", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.693 "hdgst": ${hdgst:-false}, 00:25:59.693 "ddgst": ${ddgst:-false} 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 } 00:25:59.693 EOF 00:25:59.693 )") 00:25:59.693 15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.693 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.693 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.693 { 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme$subsystem", 00:25:59.693 "trtype": "$TEST_TRANSPORT", 00:25:59.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "$NVMF_PORT", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.693 "hdgst": ${hdgst:-false}, 00:25:59.693 "ddgst": ${ddgst:-false} 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 } 00:25:59.693 EOF 00:25:59.693 )") 00:25:59.693 15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.693 15:11:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.693 15:11:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.693 { 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme$subsystem", 00:25:59.693 "trtype": "$TEST_TRANSPORT", 00:25:59.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "$NVMF_PORT", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.693 "hdgst": ${hdgst:-false}, 00:25:59.693 "ddgst": ${ddgst:-false} 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 } 00:25:59.693 EOF 00:25:59.693 )") 00:25:59.693 15:11:18 -- nvmf/common.sh@542 -- # cat 00:25:59.693 15:11:18 -- nvmf/common.sh@544 -- # jq . 
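At this point tc1 has already SIGKILLed the placeholder bdev_svc that was holding /var/tmp/bdevperf.sock (the 'Killed' message from shutdown.sh line 73 above) and verified with 'kill -0 3387540' that the target survived, so the real I/O phase starts: bdevperf attaches to all ten subsystems over 10.0.0.2:4420 and drives a short verify workload whose per-controller results appear in the table further down. The shutdown.sh@91 invocation, condensed and annotated (run from the SPDK repo root):
  #   -q 64      queue depth per bdev
  #   -o 65536   64 KiB I/O size
  #   -w verify  write the blocks, read them back and compare
  #   -t 1       run for one second (the tc2 run later in the log uses -t 10)
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
      -q 64 -o 65536 -w verify -t 1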
00:25:59.693 15:11:18 -- nvmf/common.sh@545 -- # IFS=, 00:25:59.693 15:11:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme1", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 },{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme2", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 },{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme3", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 },{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme4", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 },{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme5", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 },{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme6", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 },{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme7", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 },{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme8", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": 
"bdev_nvme_attach_controller" 00:25:59.693 },{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme9", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 },{ 00:25:59.693 "params": { 00:25:59.693 "name": "Nvme10", 00:25:59.693 "trtype": "tcp", 00:25:59.693 "traddr": "10.0.0.2", 00:25:59.693 "adrfam": "ipv4", 00:25:59.693 "trsvcid": "4420", 00:25:59.693 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:59.693 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:59.693 "hdgst": false, 00:25:59.693 "ddgst": false 00:25:59.693 }, 00:25:59.693 "method": "bdev_nvme_attach_controller" 00:25:59.693 }' 00:25:59.693 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.693 [2024-06-11 15:11:18.332405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.693 [2024-06-11 15:11:18.417699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.067 Running I/O for 1 seconds... 00:26:02.443 00:26:02.443 Latency(us) 00:26:02.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.443 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 Nvme1n1 : 1.10 322.27 20.14 0.00 0.00 193669.20 29431.62 181117.67 00:26:02.443 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 Nvme2n1 : 1.10 321.49 20.09 0.00 0.00 193017.70 26214.40 161099.40 00:26:02.443 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 Nvme3n1 : 1.13 351.65 21.98 0.00 0.00 175532.89 16801.05 152520.15 00:26:02.443 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 Nvme4n1 : 1.13 352.22 22.01 0.00 0.00 173977.57 22043.93 157286.40 00:26:02.443 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 Nvme5n1 : 1.10 323.98 20.25 0.00 0.00 182753.97 53620.36 142987.64 00:26:02.443 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 Nvme6n1 : 1.13 350.82 21.93 0.00 0.00 171596.54 21448.15 135361.63 00:26:02.443 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 Nvme7n1 : 1.14 349.42 21.84 0.00 0.00 170664.55 22282.24 149660.39 00:26:02.443 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 Nvme8n1 : 1.12 317.86 19.87 0.00 0.00 184476.37 21448.15 148707.14 00:26:02.443 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 Nvme9n1 : 1.13 315.01 19.69 0.00 0.00 186560.18 12571.00 168725.41 00:26:02.443 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.443 Verification LBA range: start 0x0 length 0x400 00:26:02.443 
Nvme10n1 : 1.14 355.36 22.21 0.00 0.00 163777.84 15132.86 141081.13 00:26:02.443 =================================================================================================================== 00:26:02.443 Total : 3360.09 210.01 0.00 0.00 179093.29 12571.00 181117.67 00:26:02.701 15:11:21 -- target/shutdown.sh@93 -- # stoptarget 00:26:02.701 15:11:21 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:02.701 15:11:21 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:02.701 15:11:21 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:02.701 15:11:21 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:02.701 15:11:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:02.701 15:11:21 -- nvmf/common.sh@116 -- # sync 00:26:02.701 15:11:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:02.701 15:11:21 -- nvmf/common.sh@119 -- # set +e 00:26:02.701 15:11:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:02.701 15:11:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:02.701 rmmod nvme_tcp 00:26:02.701 rmmod nvme_fabrics 00:26:02.701 rmmod nvme_keyring 00:26:02.701 15:11:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:02.701 15:11:21 -- nvmf/common.sh@123 -- # set -e 00:26:02.701 15:11:21 -- nvmf/common.sh@124 -- # return 0 00:26:02.701 15:11:21 -- nvmf/common.sh@477 -- # '[' -n 3387540 ']' 00:26:02.701 15:11:21 -- nvmf/common.sh@478 -- # killprocess 3387540 00:26:02.701 15:11:21 -- common/autotest_common.sh@926 -- # '[' -z 3387540 ']' 00:26:02.702 15:11:21 -- common/autotest_common.sh@930 -- # kill -0 3387540 00:26:02.702 15:11:21 -- common/autotest_common.sh@931 -- # uname 00:26:02.702 15:11:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:02.702 15:11:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3387540 00:26:02.702 15:11:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:02.702 15:11:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:02.702 15:11:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3387540' 00:26:02.702 killing process with pid 3387540 00:26:02.702 15:11:21 -- common/autotest_common.sh@945 -- # kill 3387540 00:26:02.702 15:11:21 -- common/autotest_common.sh@950 -- # wait 3387540 00:26:03.269 15:11:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:03.269 15:11:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:03.269 15:11:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:03.269 15:11:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.269 15:11:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:03.269 15:11:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.269 15:11:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.269 15:11:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.170 15:11:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:05.170 00:26:05.170 real 0m16.301s 00:26:05.170 user 0m36.668s 00:26:05.170 sys 0m6.177s 00:26:05.170 15:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.170 15:11:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.170 ************************************ 00:26:05.170 END TEST nvmf_shutdown_tc1 00:26:05.170 ************************************ 00:26:05.170 15:11:23 -- target/shutdown.sh@147 -- # run_test 
nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:05.170 15:11:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:05.170 15:11:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:05.170 15:11:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.170 ************************************ 00:26:05.170 START TEST nvmf_shutdown_tc2 00:26:05.170 ************************************ 00:26:05.170 15:11:23 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:26:05.170 15:11:23 -- target/shutdown.sh@98 -- # starttarget 00:26:05.170 15:11:23 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:05.170 15:11:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:05.170 15:11:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.170 15:11:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:05.170 15:11:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:05.170 15:11:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:05.170 15:11:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.170 15:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.170 15:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.170 15:11:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:05.170 15:11:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:05.170 15:11:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.170 15:11:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:05.170 15:11:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:05.170 15:11:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:05.170 15:11:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:05.170 15:11:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:05.170 15:11:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:05.170 15:11:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:05.170 15:11:23 -- nvmf/common.sh@294 -- # net_devs=() 00:26:05.170 15:11:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:05.170 15:11:23 -- nvmf/common.sh@295 -- # e810=() 00:26:05.170 15:11:23 -- nvmf/common.sh@295 -- # local -ga e810 00:26:05.170 15:11:23 -- nvmf/common.sh@296 -- # x722=() 00:26:05.170 15:11:23 -- nvmf/common.sh@296 -- # local -ga x722 00:26:05.170 15:11:23 -- nvmf/common.sh@297 -- # mlx=() 00:26:05.170 15:11:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:05.170 15:11:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.170 15:11:23 -- nvmf/common.sh@319 -- # 
pci_devs+=("${e810[@]}") 00:26:05.170 15:11:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:05.170 15:11:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:05.170 15:11:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:05.170 15:11:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:05.170 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:05.170 15:11:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:05.170 15:11:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:05.170 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:05.170 15:11:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:05.170 15:11:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:05.170 15:11:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.170 15:11:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:05.170 15:11:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.170 15:11:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:05.170 Found net devices under 0000:af:00.0: cvl_0_0 00:26:05.170 15:11:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.170 15:11:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:05.170 15:11:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.170 15:11:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:05.170 15:11:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.170 15:11:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:05.170 Found net devices under 0000:af:00.1: cvl_0_1 00:26:05.170 15:11:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.170 15:11:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:05.170 15:11:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:05.170 15:11:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:05.170 15:11:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:05.170 15:11:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.170 15:11:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.170 15:11:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.170 15:11:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:05.170 15:11:23 -- nvmf/common.sh@235 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.170 15:11:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.170 15:11:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:05.170 15:11:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.170 15:11:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.170 15:11:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:05.170 15:11:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:05.170 15:11:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.170 15:11:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.450 15:11:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.450 15:11:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.450 15:11:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:05.450 15:11:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.450 15:11:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.450 15:11:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.450 15:11:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:05.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:26:05.450 00:26:05.450 --- 10.0.0.2 ping statistics --- 00:26:05.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.450 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:26:05.450 15:11:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:05.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:26:05.450 00:26:05.450 --- 10.0.0.1 ping statistics --- 00:26:05.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.450 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:26:05.450 15:11:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.450 15:11:24 -- nvmf/common.sh@410 -- # return 0 00:26:05.450 15:11:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:05.450 15:11:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.450 15:11:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:05.450 15:11:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:05.450 15:11:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.450 15:11:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:05.450 15:11:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:05.731 15:11:24 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:05.731 15:11:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:05.731 15:11:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:05.731 15:11:24 -- common/autotest_common.sh@10 -- # set +x 00:26:05.731 15:11:24 -- nvmf/common.sh@469 -- # nvmfpid=3389590 00:26:05.731 15:11:24 -- nvmf/common.sh@470 -- # waitforlisten 3389590 00:26:05.731 15:11:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:05.731 15:11:24 -- common/autotest_common.sh@819 -- # '[' -z 3389590 ']' 00:26:05.731 15:11:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.731 15:11:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:05.731 15:11:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.731 15:11:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:05.731 15:11:24 -- common/autotest_common.sh@10 -- # set +x 00:26:05.731 [2024-06-11 15:11:24.374199] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:05.731 [2024-06-11 15:11:24.374255] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.731 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.731 [2024-06-11 15:11:24.460615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:05.731 [2024-06-11 15:11:24.548371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:05.731 [2024-06-11 15:11:24.548528] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.731 [2024-06-11 15:11:24.548539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.731 [2024-06-11 15:11:24.548548] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
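The two app_setup_trace notices above are directly usable: the target was started with '-e 0xFFFF', so the full tracepoint group mask is enabled, and the trace ring for app instance 0 is exposed in /dev/shm under the shm name 'nvmf'. Following the notices while the target is still running (the copy destination is illustrative; the saved file can later be decoded offline with the same spdk_trace tool):
  spdk_trace -s nvmf -i 0                        # live snapshot, exactly as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0     # keep the ring contents for offline analysis/debug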
00:26:05.731 [2024-06-11 15:11:24.548594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.731 [2024-06-11 15:11:24.548707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.731 [2024-06-11 15:11:24.548821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:05.731 [2024-06-11 15:11:24.548821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.685 15:11:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:06.685 15:11:25 -- common/autotest_common.sh@852 -- # return 0 00:26:06.685 15:11:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:06.685 15:11:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:06.685 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:26:06.685 15:11:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.685 15:11:25 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.685 15:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.685 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:26:06.685 [2024-06-11 15:11:25.347803] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.685 15:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.685 15:11:25 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:06.685 15:11:25 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:06.685 15:11:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:06.685 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:26:06.685 15:11:25 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:06.685 15:11:25 -- target/shutdown.sh@28 -- # cat 00:26:06.685 15:11:25 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:06.685 15:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.685 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:26:06.685 Malloc1 00:26:06.685 [2024-06-11 15:11:25.447600] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.685 Malloc2 
00:26:06.685 Malloc3 00:26:06.943 Malloc4 00:26:06.943 Malloc5 00:26:06.943 Malloc6 00:26:06.943 Malloc7 00:26:06.943 Malloc8 00:26:06.943 Malloc9 00:26:07.201 Malloc10 00:26:07.201 15:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.201 15:11:25 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:07.201 15:11:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:07.201 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:26:07.201 15:11:25 -- target/shutdown.sh@102 -- # perfpid=3389910 00:26:07.201 15:11:25 -- target/shutdown.sh@103 -- # waitforlisten 3389910 /var/tmp/bdevperf.sock 00:26:07.201 15:11:25 -- common/autotest_common.sh@819 -- # '[' -z 3389910 ']' 00:26:07.201 15:11:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:07.201 15:11:25 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:07.201 15:11:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:07.201 15:11:25 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:07.201 15:11:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:07.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:07.201 15:11:25 -- nvmf/common.sh@520 -- # config=() 00:26:07.201 15:11:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:07.201 15:11:25 -- nvmf/common.sh@520 -- # local subsystem config 00:26:07.201 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:26:07.201 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.201 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.201 { 00:26:07.201 "params": { 00:26:07.201 "name": "Nvme$subsystem", 00:26:07.201 "trtype": "$TEST_TRANSPORT", 00:26:07.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.201 "adrfam": "ipv4", 00:26:07.201 "trsvcid": "$NVMF_PORT", 00:26:07.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.201 "hdgst": ${hdgst:-false}, 00:26:07.201 "ddgst": ${ddgst:-false} 00:26:07.201 }, 00:26:07.201 "method": "bdev_nvme_attach_controller" 00:26:07.201 } 00:26:07.201 EOF 00:26:07.201 )") 00:26:07.201 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.201 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.201 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.201 { 00:26:07.201 "params": { 00:26:07.201 "name": "Nvme$subsystem", 00:26:07.201 "trtype": "$TEST_TRANSPORT", 00:26:07.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.201 "adrfam": "ipv4", 00:26:07.201 "trsvcid": "$NVMF_PORT", 00:26:07.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.201 "hdgst": ${hdgst:-false}, 00:26:07.201 "ddgst": ${ddgst:-false} 00:26:07.201 }, 00:26:07.201 "method": "bdev_nvme_attach_controller" 00:26:07.201 } 00:26:07.201 EOF 00:26:07.201 )") 00:26:07.201 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.201 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.201 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.201 { 00:26:07.201 "params": { 00:26:07.201 "name": "Nvme$subsystem", 00:26:07.201 "trtype": "$TEST_TRANSPORT", 00:26:07.201 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:07.201 "adrfam": "ipv4", 00:26:07.201 "trsvcid": "$NVMF_PORT", 00:26:07.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.201 "hdgst": ${hdgst:-false}, 00:26:07.201 "ddgst": ${ddgst:-false} 00:26:07.201 }, 00:26:07.201 "method": "bdev_nvme_attach_controller" 00:26:07.201 } 00:26:07.201 EOF 00:26:07.201 )") 00:26:07.201 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.201 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.201 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.201 { 00:26:07.201 "params": { 00:26:07.201 "name": "Nvme$subsystem", 00:26:07.201 "trtype": "$TEST_TRANSPORT", 00:26:07.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.201 "adrfam": "ipv4", 00:26:07.201 "trsvcid": "$NVMF_PORT", 00:26:07.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.201 "hdgst": ${hdgst:-false}, 00:26:07.201 "ddgst": ${ddgst:-false} 00:26:07.201 }, 00:26:07.201 "method": "bdev_nvme_attach_controller" 00:26:07.201 } 00:26:07.201 EOF 00:26:07.201 )") 00:26:07.201 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.201 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.201 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.201 { 00:26:07.201 "params": { 00:26:07.202 "name": "Nvme$subsystem", 00:26:07.202 "trtype": "$TEST_TRANSPORT", 00:26:07.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "$NVMF_PORT", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.202 "hdgst": ${hdgst:-false}, 00:26:07.202 "ddgst": ${ddgst:-false} 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 } 00:26:07.202 EOF 00:26:07.202 )") 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.202 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.202 { 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme$subsystem", 00:26:07.202 "trtype": "$TEST_TRANSPORT", 00:26:07.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "$NVMF_PORT", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.202 "hdgst": ${hdgst:-false}, 00:26:07.202 "ddgst": ${ddgst:-false} 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 } 00:26:07.202 EOF 00:26:07.202 )") 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.202 [2024-06-11 15:11:25.927310] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:07.202 [2024-06-11 15:11:25.927365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389910 ] 00:26:07.202 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.202 { 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme$subsystem", 00:26:07.202 "trtype": "$TEST_TRANSPORT", 00:26:07.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "$NVMF_PORT", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.202 "hdgst": ${hdgst:-false}, 00:26:07.202 "ddgst": ${ddgst:-false} 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 } 00:26:07.202 EOF 00:26:07.202 )") 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.202 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.202 { 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme$subsystem", 00:26:07.202 "trtype": "$TEST_TRANSPORT", 00:26:07.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "$NVMF_PORT", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.202 "hdgst": ${hdgst:-false}, 00:26:07.202 "ddgst": ${ddgst:-false} 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 } 00:26:07.202 EOF 00:26:07.202 )") 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.202 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.202 { 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme$subsystem", 00:26:07.202 "trtype": "$TEST_TRANSPORT", 00:26:07.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "$NVMF_PORT", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.202 "hdgst": ${hdgst:-false}, 00:26:07.202 "ddgst": ${ddgst:-false} 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 } 00:26:07.202 EOF 00:26:07.202 )") 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.202 15:11:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.202 { 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme$subsystem", 00:26:07.202 "trtype": "$TEST_TRANSPORT", 00:26:07.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "$NVMF_PORT", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.202 "hdgst": ${hdgst:-false}, 00:26:07.202 "ddgst": ${ddgst:-false} 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 } 00:26:07.202 EOF 00:26:07.202 )") 00:26:07.202 15:11:25 -- nvmf/common.sh@542 -- # cat 00:26:07.202 15:11:25 -- nvmf/common.sh@544 -- # jq . 
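gen_nvmf_target_json assembles one attach-controller entry per subsystem from the heredocs above and jq pretty-prints the result, which is dumped next. The --json /dev/fd/63 in the traced bdevperf command line is consistent with that JSON being handed over through a bash process substitution rather than a temporary file; the wrapper line itself is not shown, but the pattern is roughly:

# Sketch: feed the generated config to bdevperf without touching disk.
# /dev/fd/63 in the trace is the descriptor bash allocates for <(...).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10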
00:26:07.202 15:11:25 -- nvmf/common.sh@545 -- # IFS=, 00:26:07.202 15:11:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme1", 00:26:07.202 "trtype": "tcp", 00:26:07.202 "traddr": "10.0.0.2", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "4420", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:07.202 "hdgst": false, 00:26:07.202 "ddgst": false 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 },{ 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme2", 00:26:07.202 "trtype": "tcp", 00:26:07.202 "traddr": "10.0.0.2", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "4420", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:07.202 "hdgst": false, 00:26:07.202 "ddgst": false 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 },{ 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme3", 00:26:07.202 "trtype": "tcp", 00:26:07.202 "traddr": "10.0.0.2", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "4420", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:07.202 "hdgst": false, 00:26:07.202 "ddgst": false 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 },{ 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme4", 00:26:07.202 "trtype": "tcp", 00:26:07.202 "traddr": "10.0.0.2", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "4420", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:07.202 "hdgst": false, 00:26:07.202 "ddgst": false 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 },{ 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme5", 00:26:07.202 "trtype": "tcp", 00:26:07.202 "traddr": "10.0.0.2", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "4420", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:07.202 "hdgst": false, 00:26:07.202 "ddgst": false 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 },{ 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme6", 00:26:07.202 "trtype": "tcp", 00:26:07.202 "traddr": "10.0.0.2", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "4420", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:07.202 "hdgst": false, 00:26:07.202 "ddgst": false 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 },{ 00:26:07.202 "params": { 00:26:07.202 "name": "Nvme7", 00:26:07.202 "trtype": "tcp", 00:26:07.202 "traddr": "10.0.0.2", 00:26:07.202 "adrfam": "ipv4", 00:26:07.202 "trsvcid": "4420", 00:26:07.202 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:07.202 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:07.202 "hdgst": false, 00:26:07.202 "ddgst": false 00:26:07.202 }, 00:26:07.202 "method": "bdev_nvme_attach_controller" 00:26:07.202 },{ 00:26:07.202 "params": { 00:26:07.203 "name": "Nvme8", 00:26:07.203 "trtype": "tcp", 00:26:07.203 "traddr": "10.0.0.2", 00:26:07.203 "adrfam": "ipv4", 00:26:07.203 "trsvcid": "4420", 00:26:07.203 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:07.203 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:07.203 "hdgst": false, 00:26:07.203 "ddgst": false 00:26:07.203 }, 00:26:07.203 "method": 
"bdev_nvme_attach_controller" 00:26:07.203 },{ 00:26:07.203 "params": { 00:26:07.203 "name": "Nvme9", 00:26:07.203 "trtype": "tcp", 00:26:07.203 "traddr": "10.0.0.2", 00:26:07.203 "adrfam": "ipv4", 00:26:07.203 "trsvcid": "4420", 00:26:07.203 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:07.203 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:07.203 "hdgst": false, 00:26:07.203 "ddgst": false 00:26:07.203 }, 00:26:07.203 "method": "bdev_nvme_attach_controller" 00:26:07.203 },{ 00:26:07.203 "params": { 00:26:07.203 "name": "Nvme10", 00:26:07.203 "trtype": "tcp", 00:26:07.203 "traddr": "10.0.0.2", 00:26:07.203 "adrfam": "ipv4", 00:26:07.203 "trsvcid": "4420", 00:26:07.203 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:07.203 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:07.203 "hdgst": false, 00:26:07.203 "ddgst": false 00:26:07.203 }, 00:26:07.203 "method": "bdev_nvme_attach_controller" 00:26:07.203 }' 00:26:07.203 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.203 [2024-06-11 15:11:26.018478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.460 [2024-06-11 15:11:26.101768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.832 Running I/O for 10 seconds... 00:26:08.832 15:11:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:08.832 15:11:27 -- common/autotest_common.sh@852 -- # return 0 00:26:08.832 15:11:27 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:08.832 15:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.832 15:11:27 -- common/autotest_common.sh@10 -- # set +x 00:26:08.832 15:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.832 15:11:27 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:08.832 15:11:27 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:08.832 15:11:27 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:08.832 15:11:27 -- target/shutdown.sh@57 -- # local ret=1 00:26:08.832 15:11:27 -- target/shutdown.sh@58 -- # local i 00:26:08.832 15:11:27 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:08.832 15:11:27 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:08.832 15:11:27 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:08.832 15:11:27 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:08.832 15:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.832 15:11:27 -- common/autotest_common.sh@10 -- # set +x 00:26:08.832 15:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.090 15:11:27 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:09.090 15:11:27 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:09.090 15:11:27 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:09.349 15:11:27 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:09.349 15:11:27 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:09.349 15:11:27 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:09.349 15:11:27 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:09.349 15:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.349 15:11:27 -- common/autotest_common.sh@10 -- # set +x 00:26:09.349 15:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.349 15:11:27 -- target/shutdown.sh@60 -- # read_io_count=129 00:26:09.349 15:11:27 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:26:09.349 15:11:27 -- target/shutdown.sh@64 -- # ret=0 
00:26:09.349 15:11:27 -- target/shutdown.sh@65 -- # break 00:26:09.349 15:11:27 -- target/shutdown.sh@69 -- # return 0 00:26:09.349 15:11:27 -- target/shutdown.sh@109 -- # killprocess 3389910 00:26:09.349 15:11:27 -- common/autotest_common.sh@926 -- # '[' -z 3389910 ']' 00:26:09.349 15:11:27 -- common/autotest_common.sh@930 -- # kill -0 3389910 00:26:09.349 15:11:27 -- common/autotest_common.sh@931 -- # uname 00:26:09.349 15:11:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:09.349 15:11:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3389910 00:26:09.349 15:11:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:09.349 15:11:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:09.349 15:11:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3389910' 00:26:09.349 killing process with pid 3389910 00:26:09.349 15:11:28 -- common/autotest_common.sh@945 -- # kill 3389910 00:26:09.349 15:11:28 -- common/autotest_common.sh@950 -- # wait 3389910 00:26:09.349 Received shutdown signal, test time was about 0.595107 seconds 00:26:09.349 00:26:09.349 Latency(us) 00:26:09.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.349 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme1n1 : 0.54 353.56 22.10 0.00 0.00 173954.49 18350.08 151566.89 00:26:09.349 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme2n1 : 0.55 344.49 21.53 0.00 0.00 174314.16 30384.87 157286.40 00:26:09.349 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme3n1 : 0.59 323.18 20.20 0.00 0.00 171391.53 16324.42 161099.40 00:26:09.349 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme4n1 : 0.56 337.50 21.09 0.00 0.00 172890.87 22043.93 171585.16 00:26:09.349 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme5n1 : 0.56 398.77 24.92 0.00 0.00 145327.67 7447.27 140127.88 00:26:09.349 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme6n1 : 0.55 348.61 21.79 0.00 0.00 159688.97 27405.96 149660.39 00:26:09.349 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme7n1 : 0.54 351.14 21.95 0.00 0.00 155271.40 26810.18 135361.63 00:26:09.349 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme8n1 : 0.54 350.10 21.88 0.00 0.00 152400.36 27167.65 126782.37 00:26:09.349 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme9n1 : 0.56 340.78 21.30 0.00 0.00 154169.52 26571.87 143940.89 00:26:09.349 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.349 Verification LBA range: start 0x0 length 0x400 00:26:09.349 Nvme10n1 : 0.57 338.82 21.18 0.00 0.00 154522.07 13822.14 144894.14 00:26:09.349 
=================================================================================================================== 00:26:09.349 Total : 3486.94 217.93 0.00 0.00 161106.34 7447.27 171585.16 00:26:09.607 15:11:28 -- target/shutdown.sh@112 -- # sleep 1 00:26:10.982 15:11:29 -- target/shutdown.sh@113 -- # kill -0 3389590 00:26:10.982 15:11:29 -- target/shutdown.sh@115 -- # stoptarget 00:26:10.982 15:11:29 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:10.982 15:11:29 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:10.982 15:11:29 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:10.982 15:11:29 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:10.982 15:11:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:10.982 15:11:29 -- nvmf/common.sh@116 -- # sync 00:26:10.982 15:11:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:10.982 15:11:29 -- nvmf/common.sh@119 -- # set +e 00:26:10.982 15:11:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:10.982 15:11:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:10.982 rmmod nvme_tcp 00:26:10.982 rmmod nvme_fabrics 00:26:10.982 rmmod nvme_keyring 00:26:10.982 15:11:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:10.982 15:11:29 -- nvmf/common.sh@123 -- # set -e 00:26:10.982 15:11:29 -- nvmf/common.sh@124 -- # return 0 00:26:10.982 15:11:29 -- nvmf/common.sh@477 -- # '[' -n 3389590 ']' 00:26:10.982 15:11:29 -- nvmf/common.sh@478 -- # killprocess 3389590 00:26:10.982 15:11:29 -- common/autotest_common.sh@926 -- # '[' -z 3389590 ']' 00:26:10.982 15:11:29 -- common/autotest_common.sh@930 -- # kill -0 3389590 00:26:10.982 15:11:29 -- common/autotest_common.sh@931 -- # uname 00:26:10.982 15:11:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:10.982 15:11:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3389590 00:26:10.982 15:11:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:10.982 15:11:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:10.982 15:11:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3389590' 00:26:10.982 killing process with pid 3389590 00:26:10.982 15:11:29 -- common/autotest_common.sh@945 -- # kill 3389590 00:26:10.982 15:11:29 -- common/autotest_common.sh@950 -- # wait 3389590 00:26:11.241 15:11:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:11.241 15:11:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:11.241 15:11:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:11.241 15:11:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.241 15:11:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:11.241 15:11:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.241 15:11:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.241 15:11:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.779 15:11:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:13.779 00:26:13.779 real 0m8.035s 00:26:13.779 user 0m24.139s 00:26:13.779 sys 0m1.371s 00:26:13.779 15:11:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.779 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:26:13.779 ************************************ 00:26:13.779 END TEST nvmf_shutdown_tc2 00:26:13.779 ************************************ 00:26:13.779 15:11:32 
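Reassembled from the xtrace above, the tc2 flow is: wait until bdevperf has completed real reads against Nvme1n1, kill bdevperf (pid 3389910) and print its latency table, then tear the target (pid 3389590) down. The two helpers doing the waiting and the killing reduce to roughly this (a sketch; argument checks and error paths trimmed, rpc_cmd is the suite's rpc.py wrapper):

# waitforio: poll iostat on the bdevperf RPC socket until Nvme1n1 has done
# at least 100 reads (3 on the first poll, 129 on the second in the trace).
waitforio() {
    local rpc=$1 bdev=$2 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$rpc" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# killprocess: refuse to kill a sudo process, otherwise kill and reap.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid"
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}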
-- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:13.779 15:11:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:13.779 15:11:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:13.779 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:26:13.779 ************************************ 00:26:13.779 START TEST nvmf_shutdown_tc3 00:26:13.779 ************************************ 00:26:13.779 15:11:32 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:26:13.779 15:11:32 -- target/shutdown.sh@120 -- # starttarget 00:26:13.779 15:11:32 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:13.779 15:11:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:13.779 15:11:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.779 15:11:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:13.779 15:11:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:13.779 15:11:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:13.779 15:11:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.779 15:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.779 15:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.779 15:11:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:13.779 15:11:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:13.779 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:26:13.779 15:11:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:13.779 15:11:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:13.779 15:11:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:13.779 15:11:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:13.779 15:11:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:13.779 15:11:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:13.779 15:11:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:13.779 15:11:32 -- nvmf/common.sh@294 -- # net_devs=() 00:26:13.779 15:11:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:13.779 15:11:32 -- nvmf/common.sh@295 -- # e810=() 00:26:13.779 15:11:32 -- nvmf/common.sh@295 -- # local -ga e810 00:26:13.779 15:11:32 -- nvmf/common.sh@296 -- # x722=() 00:26:13.779 15:11:32 -- nvmf/common.sh@296 -- # local -ga x722 00:26:13.779 15:11:32 -- nvmf/common.sh@297 -- # mlx=() 00:26:13.779 15:11:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:13.779 15:11:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.779 15:11:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.779 15:11:32 -- 
nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:13.779 15:11:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:13.779 15:11:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:13.779 15:11:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:13.779 15:11:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:13.779 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:13.779 15:11:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:13.779 15:11:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:13.779 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:13.779 15:11:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:13.779 15:11:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:13.779 15:11:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:13.779 15:11:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.779 15:11:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:13.779 15:11:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.779 15:11:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:13.779 Found net devices under 0000:af:00.0: cvl_0_0 00:26:13.779 15:11:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.779 15:11:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:13.779 15:11:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.779 15:11:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:13.779 15:11:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.779 15:11:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:13.779 Found net devices under 0000:af:00.1: cvl_0_1 00:26:13.779 15:11:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.780 15:11:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:13.780 15:11:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:13.780 15:11:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:13.780 15:11:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:13.780 15:11:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:13.780 15:11:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.780 15:11:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.780 15:11:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.780 15:11:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:13.780 15:11:32 -- nvmf/common.sh@235 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.780 15:11:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.780 15:11:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:13.780 15:11:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.780 15:11:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.780 15:11:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:13.780 15:11:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:13.780 15:11:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.780 15:11:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.780 15:11:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.780 15:11:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.780 15:11:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:13.780 15:11:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.780 15:11:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.780 15:11:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.780 15:11:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:13.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:26:13.780 00:26:13.780 --- 10.0.0.2 ping statistics --- 00:26:13.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.780 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:26:13.780 15:11:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:13.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:26:13.780 00:26:13.780 --- 10.0.0.1 ping statistics --- 00:26:13.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.780 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:26:13.780 15:11:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.780 15:11:32 -- nvmf/common.sh@410 -- # return 0 00:26:13.780 15:11:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:13.780 15:11:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.780 15:11:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:13.780 15:11:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:13.780 15:11:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.780 15:11:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:13.780 15:11:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:13.780 15:11:32 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:13.780 15:11:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:13.780 15:11:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:13.780 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:26:13.780 15:11:32 -- nvmf/common.sh@469 -- # nvmfpid=3391200 00:26:13.780 15:11:32 -- nvmf/common.sh@470 -- # waitforlisten 3391200 00:26:13.780 15:11:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:13.780 15:11:32 -- common/autotest_common.sh@819 -- # '[' -z 3391200 ']' 00:26:13.780 15:11:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.780 15:11:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:13.780 15:11:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.780 15:11:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:13.780 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:26:13.780 [2024-06-11 15:11:32.433643] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:13.780 [2024-06-11 15:11:32.433696] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.780 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.780 [2024-06-11 15:11:32.520595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.780 [2024-06-11 15:11:32.608921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:13.780 [2024-06-11 15:11:32.609070] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.780 [2024-06-11 15:11:32.609082] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.780 [2024-06-11 15:11:32.609092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
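Before this second target came up, nvmftestinit rebuilt the same two-sided topology used for tc2: cvl_0_0 becomes the target-side port inside the cvl_0_0_ns_spdk namespace (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and TCP port 4420 is opened in iptables; the two pings above only confirm reachability in both directions. Condensed from the nvmf_tcp_init commands traced above (address flushes and error handling omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root netns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target netns -> initiator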
00:26:13.780 [2024-06-11 15:11:32.609196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.780 [2024-06-11 15:11:32.609309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.780 [2024-06-11 15:11:32.609422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:13.780 [2024-06-11 15:11:32.609423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.712 15:11:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:14.712 15:11:33 -- common/autotest_common.sh@852 -- # return 0 00:26:14.712 15:11:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:14.712 15:11:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:14.712 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.712 15:11:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.712 15:11:33 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.712 15:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.712 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.712 [2024-06-11 15:11:33.408790] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.712 15:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.712 15:11:33 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:14.712 15:11:33 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:14.712 15:11:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:14.712 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.712 15:11:33 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.712 15:11:33 -- target/shutdown.sh@28 -- # cat 00:26:14.712 15:11:33 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:14.712 15:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.712 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.712 Malloc1 00:26:14.712 [2024-06-11 15:11:33.508793] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.712 Malloc2 
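The tc3 target (pid 3391200) now comes up with the same ten malloc-backed subsystems as tc2; the difference is in the teardown. Before starting I/O, tc3 arms a cleanup trap (traced further down at shutdown.sh@129) so bdevperf is force-killed even if the test aborts, because this time it is the target, not bdevperf, that gets killed mid-run once the 100-read threshold is met. In outline, taken from the trace below:

# tc3 ordering: the target dies under load, the trap guarantees bdevperf cleanup.
trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
waitforio /var/tmp/bdevperf.sock Nvme1n1   # same >=100 reads check as in tc2
killprocess 3391200                        # kill nvmf_tgt while bdevperf still holds its connections

The burst of nvmf_tcp_qpair_set_recv_state errors near the end of this log follows from that ordering: the target is shutting its qpairs down while the initiator connections are still active.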
00:26:14.969 Malloc3 00:26:14.969 Malloc4 00:26:14.969 Malloc5 00:26:14.969 Malloc6 00:26:14.969 Malloc7 00:26:14.969 Malloc8 00:26:15.227 Malloc9 00:26:15.227 Malloc10 00:26:15.227 15:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:15.227 15:11:33 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:15.227 15:11:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:15.227 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:15.227 15:11:33 -- target/shutdown.sh@124 -- # perfpid=3391548 00:26:15.227 15:11:33 -- target/shutdown.sh@125 -- # waitforlisten 3391548 /var/tmp/bdevperf.sock 00:26:15.227 15:11:33 -- common/autotest_common.sh@819 -- # '[' -z 3391548 ']' 00:26:15.227 15:11:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:15.227 15:11:33 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:15.227 15:11:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:15.227 15:11:33 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:15.227 15:11:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:15.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:15.227 15:11:33 -- nvmf/common.sh@520 -- # config=() 00:26:15.227 15:11:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:15.227 15:11:33 -- nvmf/common.sh@520 -- # local subsystem config 00:26:15.227 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:26:15.227 15:11:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.227 { 00:26:15.227 "params": { 00:26:15.227 "name": "Nvme$subsystem", 00:26:15.227 "trtype": "$TEST_TRANSPORT", 00:26:15.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.227 "adrfam": "ipv4", 00:26:15.227 "trsvcid": "$NVMF_PORT", 00:26:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.227 "hdgst": ${hdgst:-false}, 00:26:15.227 "ddgst": ${ddgst:-false} 00:26:15.227 }, 00:26:15.227 "method": "bdev_nvme_attach_controller" 00:26:15.227 } 00:26:15.227 EOF 00:26:15.227 )") 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # cat 00:26:15.227 15:11:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.227 { 00:26:15.227 "params": { 00:26:15.227 "name": "Nvme$subsystem", 00:26:15.227 "trtype": "$TEST_TRANSPORT", 00:26:15.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.227 "adrfam": "ipv4", 00:26:15.227 "trsvcid": "$NVMF_PORT", 00:26:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.227 "hdgst": ${hdgst:-false}, 00:26:15.227 "ddgst": ${ddgst:-false} 00:26:15.227 }, 00:26:15.227 "method": "bdev_nvme_attach_controller" 00:26:15.227 } 00:26:15.227 EOF 00:26:15.227 )") 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # cat 00:26:15.227 15:11:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.227 { 00:26:15.227 "params": { 00:26:15.227 "name": "Nvme$subsystem", 00:26:15.227 "trtype": "$TEST_TRANSPORT", 00:26:15.227 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:15.227 "adrfam": "ipv4", 00:26:15.227 "trsvcid": "$NVMF_PORT", 00:26:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.227 "hdgst": ${hdgst:-false}, 00:26:15.227 "ddgst": ${ddgst:-false} 00:26:15.227 }, 00:26:15.227 "method": "bdev_nvme_attach_controller" 00:26:15.227 } 00:26:15.227 EOF 00:26:15.227 )") 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # cat 00:26:15.227 15:11:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.227 { 00:26:15.227 "params": { 00:26:15.227 "name": "Nvme$subsystem", 00:26:15.227 "trtype": "$TEST_TRANSPORT", 00:26:15.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.227 "adrfam": "ipv4", 00:26:15.227 "trsvcid": "$NVMF_PORT", 00:26:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.227 "hdgst": ${hdgst:-false}, 00:26:15.227 "ddgst": ${ddgst:-false} 00:26:15.227 }, 00:26:15.227 "method": "bdev_nvme_attach_controller" 00:26:15.227 } 00:26:15.227 EOF 00:26:15.227 )") 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # cat 00:26:15.227 15:11:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.227 { 00:26:15.227 "params": { 00:26:15.227 "name": "Nvme$subsystem", 00:26:15.227 "trtype": "$TEST_TRANSPORT", 00:26:15.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.227 "adrfam": "ipv4", 00:26:15.227 "trsvcid": "$NVMF_PORT", 00:26:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.227 "hdgst": ${hdgst:-false}, 00:26:15.227 "ddgst": ${ddgst:-false} 00:26:15.227 }, 00:26:15.227 "method": "bdev_nvme_attach_controller" 00:26:15.227 } 00:26:15.227 EOF 00:26:15.227 )") 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # cat 00:26:15.227 15:11:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.227 { 00:26:15.227 "params": { 00:26:15.227 "name": "Nvme$subsystem", 00:26:15.227 "trtype": "$TEST_TRANSPORT", 00:26:15.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.227 "adrfam": "ipv4", 00:26:15.227 "trsvcid": "$NVMF_PORT", 00:26:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.227 "hdgst": ${hdgst:-false}, 00:26:15.227 "ddgst": ${ddgst:-false} 00:26:15.227 }, 00:26:15.227 "method": "bdev_nvme_attach_controller" 00:26:15.227 } 00:26:15.227 EOF 00:26:15.227 )") 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # cat 00:26:15.227 15:11:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.227 { 00:26:15.227 "params": { 00:26:15.227 "name": "Nvme$subsystem", 00:26:15.227 "trtype": "$TEST_TRANSPORT", 00:26:15.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.227 "adrfam": "ipv4", 00:26:15.227 "trsvcid": "$NVMF_PORT", 00:26:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.227 "hdgst": ${hdgst:-false}, 00:26:15.227 "ddgst": ${ddgst:-false} 00:26:15.227 }, 00:26:15.227 "method": "bdev_nvme_attach_controller" 00:26:15.227 } 00:26:15.227 EOF 00:26:15.227 )") 00:26:15.227 [2024-06-11 15:11:34.000384] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:15.227 [2024-06-11 15:11:34.000442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391548 ] 00:26:15.227 15:11:33 -- nvmf/common.sh@542 -- # cat 00:26:15.227 15:11:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.227 15:11:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.227 { 00:26:15.227 "params": { 00:26:15.228 "name": "Nvme$subsystem", 00:26:15.228 "trtype": "$TEST_TRANSPORT", 00:26:15.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "$NVMF_PORT", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.228 "hdgst": ${hdgst:-false}, 00:26:15.228 "ddgst": ${ddgst:-false} 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 } 00:26:15.228 EOF 00:26:15.228 )") 00:26:15.228 15:11:34 -- nvmf/common.sh@542 -- # cat 00:26:15.228 15:11:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.228 15:11:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.228 { 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme$subsystem", 00:26:15.228 "trtype": "$TEST_TRANSPORT", 00:26:15.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "$NVMF_PORT", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.228 "hdgst": ${hdgst:-false}, 00:26:15.228 "ddgst": ${ddgst:-false} 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 } 00:26:15.228 EOF 00:26:15.228 )") 00:26:15.228 15:11:34 -- nvmf/common.sh@542 -- # cat 00:26:15.228 15:11:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.228 15:11:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.228 { 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme$subsystem", 00:26:15.228 "trtype": "$TEST_TRANSPORT", 00:26:15.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "$NVMF_PORT", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.228 "hdgst": ${hdgst:-false}, 00:26:15.228 "ddgst": ${ddgst:-false} 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 } 00:26:15.228 EOF 00:26:15.228 )") 00:26:15.228 15:11:34 -- nvmf/common.sh@542 -- # cat 00:26:15.228 15:11:34 -- nvmf/common.sh@544 -- # jq . 
00:26:15.228 15:11:34 -- nvmf/common.sh@545 -- # IFS=, 00:26:15.228 15:11:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme1", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 },{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme2", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 },{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme3", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 },{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme4", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 },{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme5", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 },{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme6", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 },{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme7", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 },{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme8", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": 
"bdev_nvme_attach_controller" 00:26:15.228 },{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme9", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 },{ 00:26:15.228 "params": { 00:26:15.228 "name": "Nvme10", 00:26:15.228 "trtype": "tcp", 00:26:15.228 "traddr": "10.0.0.2", 00:26:15.228 "adrfam": "ipv4", 00:26:15.228 "trsvcid": "4420", 00:26:15.228 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:15.228 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:15.228 "hdgst": false, 00:26:15.228 "ddgst": false 00:26:15.228 }, 00:26:15.228 "method": "bdev_nvme_attach_controller" 00:26:15.228 }' 00:26:15.228 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.486 [2024-06-11 15:11:34.089268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.486 [2024-06-11 15:11:34.171675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.859 Running I/O for 10 seconds... 00:26:16.859 15:11:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:16.859 15:11:35 -- common/autotest_common.sh@852 -- # return 0 00:26:16.859 15:11:35 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:16.859 15:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.859 15:11:35 -- common/autotest_common.sh@10 -- # set +x 00:26:17.117 15:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.117 15:11:35 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:17.117 15:11:35 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:17.117 15:11:35 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:17.117 15:11:35 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:17.117 15:11:35 -- target/shutdown.sh@57 -- # local ret=1 00:26:17.117 15:11:35 -- target/shutdown.sh@58 -- # local i 00:26:17.117 15:11:35 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:17.117 15:11:35 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:17.117 15:11:35 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:17.117 15:11:35 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:17.117 15:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.117 15:11:35 -- common/autotest_common.sh@10 -- # set +x 00:26:17.117 15:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.117 15:11:35 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:17.117 15:11:35 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:17.117 15:11:35 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:17.382 15:11:36 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:17.382 15:11:36 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:17.382 15:11:36 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:17.382 15:11:36 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:17.382 15:11:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.382 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:26:17.382 15:11:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.382 15:11:36 -- target/shutdown.sh@60 
-- # read_io_count=129 00:26:17.382 15:11:36 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:26:17.382 15:11:36 -- target/shutdown.sh@64 -- # ret=0 00:26:17.382 15:11:36 -- target/shutdown.sh@65 -- # break 00:26:17.382 15:11:36 -- target/shutdown.sh@69 -- # return 0 00:26:17.382 15:11:36 -- target/shutdown.sh@134 -- # killprocess 3391200 00:26:17.382 15:11:36 -- common/autotest_common.sh@926 -- # '[' -z 3391200 ']' 00:26:17.382 15:11:36 -- common/autotest_common.sh@930 -- # kill -0 3391200 00:26:17.382 15:11:36 -- common/autotest_common.sh@931 -- # uname 00:26:17.382 15:11:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:17.382 15:11:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3391200 00:26:17.382 15:11:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:17.382 15:11:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:17.382 15:11:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3391200' 00:26:17.382 killing process with pid 3391200 00:26:17.382 15:11:36 -- common/autotest_common.sh@945 -- # kill 3391200 00:26:17.382 15:11:36 -- common/autotest_common.sh@950 -- # wait 3391200 00:26:17.382 [2024-06-11 15:11:36.182639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182824] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.382 [2024-06-11 15:11:36.182843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.182996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the 
state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.183252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695f10 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 
15:11:36.184708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.383 [2024-06-11 15:11:36.184867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same 
with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.184991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185095] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.185105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2693a30 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the 
state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.384 [2024-06-11 15:11:36.187743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.187873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694370 is same with the state(5) to be set 00:26:17.385 [2024-06-11 
15:11:36.188714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694800 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.188735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694800 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same 
with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190427] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.385 [2024-06-11 15:11:36.190526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.190615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695140 is same with the 
state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191697] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.191996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 
15:11:36.192023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26955d0 is same with the state(5) to be set 00:26:17.386 [2024-06-11 15:11:36.192602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.386 [2024-06-11 15:11:36.192638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.386 [2024-06-11 15:11:36.192652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.386 [2024-06-11 15:11:36.192662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.386 [2024-06-11 15:11:36.192673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.386 [2024-06-11 15:11:36.192683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.387 [2024-06-11 15:11:36.192694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.387 [2024-06-11 15:11:36.192704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.387 [2024-06-11 15:11:36.192714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212b4b0 is same with the state(5) to be set 00:26:17.387 [2024-06-11 15:11:36.192752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.387 [2024-06-11 15:11:36.192764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.387 [2024-06-11 15:11:36.192774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.387 [2024-06-11 15:11:36.192784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.387 [2024-06-11 15:11:36.192795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.387 [2024-06-11 15:11:36.192805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.387 [2024-06-11 15:11:36.192816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.387 [2024-06-11 15:11:36.192826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.387 [2024-06-11 15:11:36.192835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d520 is same with the state(5) to be set 00:26:17.387 [2024-06-11 15:11:36.192865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.387 [2024-06-11 15:11:36.192877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.387 [2024-06-11 15:11:36.192887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.387 [2024-06-11 15:11:36.192897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.387 [2024-06-11 15:11:36.192907] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.387 [2024-06-11 15:11:36.192923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.387 [2024-06-11 15:11:36.192928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.387 [2024-06-11 15:11:36.192941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.387 [2024-06-11 15:11:36.192948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125970 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.387 [2024-06-11 15:11:36.192990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.192998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.387 [2024-06-11 15:11:36.193004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.387 [2024-06-11 15:11:36.193019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.387 [2024-06-11 15:11:36.193039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.387 [2024-06-11 15:11:36.193045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.387 [2024-06-11 15:11:36.193062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.387 [2024-06-11 15:11:36.193075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.387 [2024-06-11 15:11:36.193083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109c50 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.387 [2024-06-11 15:11:36.193122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.388 [2024-06-11 15:11:36.193129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.388 [2024-06-11 15:11:36.193142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.388 [2024-06-11 15:11:36.193155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.388 [2024-06-11 15:11:36.193162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.388 [2024-06-11 15:11:36.193176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.388 [2024-06-11 15:11:36.193192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.388 [2024-06-11 15:11:36.193206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set
00:26:17.388 [2024-06-11 15:11:36.193213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.388 [2024-06-11 15:11:36.193218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same
with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with [2024-06-11 15:11:36.193224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210c9b0 is same the state(5) to be set 00:26:17.388 with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with [2024-06-11 15:11:36.193257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsthe state(5) to be set 00:26:17.388 id:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-06-11 15:11:36.193273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with [2024-06-11 15:11:36.193296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:26:17.388 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-06-11 15:11:36.193314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with id:0 cdw10:00000000 cdw11:00000000 00:26:17.388 the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 
15:11:36.193324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cd6a0 is same [2024-06-11 15:11:36.193359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2695a60 is same with with the state(5) to be set 00:26:17.388 the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124bf0 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.388 [2024-06-11 15:11:36.193532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212db20 is same with the state(5) to be set 00:26:17.388 [2024-06-11 15:11:36.193626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.388 [2024-06-11 15:11:36.193679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.388 [2024-06-11 15:11:36.193690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.389 [2024-06-11 15:11:36.193700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cde20 is same with the state(5) to be set 00:26:17.389 [2024-06-11 15:11:36.193740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.389 [2024-06-11 15:11:36.193752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.389 [2024-06-11 15:11:36.193772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.389 [2024-06-11 15:11:36.193793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.389 [2024-06-11 15:11:36.193813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219db80 is same with the state(5) to be set 00:26:17.389 [2024-06-11 15:11:36.193867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.193882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.193912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.193935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.193957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.193980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.193992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.389 [2024-06-11 15:11:36.194524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.389 [2024-06-11 15:11:36.194579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.389 [2024-06-11 15:11:36.194592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 
[2024-06-11 15:11:36.194749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 
15:11:36.194972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.194982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.194993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195203] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195402] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c67c0 was disconnected and freed. reset controller. 
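Everything the initiator still had outstanding on that qpair, the admin ASYNC EVENT REQUESTs and the queued READ/WRITE commands listed above, completes with ABORTED - SQ DELETION (00/08) before bdev_nvme disconnects the qpair and resets the controller, so these notices are the expected teardown path for this test rather than data-path failures. A quick way to confirm that nothing completed with any other status is to tally the aborted completions per submission queue from a saved copy of this console output; the snippet below is only a sketch, and the log file name in it is an assumption, not a file produced by this job.
  # Count "ABORTED - SQ DELETION" completions per sqid in a saved copy of the log
  # (nvmf-tcp-phy-autotest.log is an illustrative path).
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' nvmf-tcp-phy-autotest.log | sort | uniq -c
Widening the grep pattern to match any completion status string gives the same per-queue tally for other statuses, which makes an unexpected status code easy to spot in output this verbose.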
00:26:17.390 [2024-06-11 15:11:36.195511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.390 [2024-06-11 15:11:36.195633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.390 [2024-06-11 15:11:36.195643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 
15:11:36.195745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195972] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.195982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.195994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.196005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.196017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.196031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.196044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.196067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.196083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.196093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.196105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.196115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.196127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.196136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.196149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.196160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.196172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.196181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.196193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.196203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.391 [2024-06-11 15:11:36.210480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.391 [2024-06-11 15:11:36.210495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.392 [2024-06-11 15:11:36.210842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.392 [2024-06-11 15:11:36.210857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.210873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.210888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.210905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.210919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.210939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.210953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.210970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.210985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.211001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.211016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.211057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.211072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.211089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.211104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.211121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.211135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.211152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.211170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.211187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.211201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.211218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.211232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.211329] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20fa800 was disconnected and freed. reset controller. 00:26:17.393 [2024-06-11 15:11:36.212097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212b4b0 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.212150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206d520 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.212177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2125970 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.212206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2109c50 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.212228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210c9b0 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.212249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cd6a0 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.212271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124bf0 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.212293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212db20 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.212321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cde20 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.212343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219db80 (9): Bad file descriptor 00:26:17.393 [2024-06-11 15:11:36.216190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.393 [2024-06-11 15:11:36.216636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.393 [2024-06-11 15:11:36.216651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.216985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.216999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.394 [2024-06-11 15:11:36.217774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.394 [2024-06-11 15:11:36.217790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.395 [2024-06-11 15:11:36.217805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.217821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.217836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.217853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.217867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.217884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.217898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.217915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.217930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.217958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.217972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.217992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.218006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.218023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.218043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.218059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.218072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.218088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.218101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.218117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.218130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.218146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.660 [2024-06-11 15:11:36.218160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.660 [2024-06-11 15:11:36.218176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.218189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.218205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.218218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.218233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.218246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.218364] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22570b0 was disconnected and freed. reset controller. 00:26:17.661 [2024-06-11 15:11:36.219967] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.661 [2024-06-11 15:11:36.220007] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:17.661 [2024-06-11 15:11:36.221987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.661 [2024-06-11 15:11:36.222502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.222774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.222788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 
15:11:36.222804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2258560 is same with the state(5) to be set 00:26:17.661 [2024-06-11 15:11:36.222873] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2258560 was disconnected and freed. reset controller. 00:26:17.661 [2024-06-11 15:11:36.224698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.661 [2024-06-11 15:11:36.224933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.661 [2024-06-11 15:11:36.224955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2109c50 with addr=10.0.0.2, port=4420 00:26:17.661 [2024-06-11 15:11:36.224970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109c50 is same with the state(5) to be set 00:26:17.661 [2024-06-11 15:11:36.225244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.661 [2024-06-11 15:11:36.225391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.661 [2024-06-11 15:11:36.225409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x210c9b0 with addr=10.0.0.2, port=4420 00:26:17.661 [2024-06-11 15:11:36.225423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210c9b0 is same with the state(5) to be set 00:26:17.661 [2024-06-11 15:11:36.225460] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.661 [2024-06-11 15:11:36.225499] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.661 [2024-06-11 15:11:36.228037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.228067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.228092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.228106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.228128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.228142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.228164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.661 [2024-06-11 15:11:36.228177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.661 [2024-06-11 15:11:36.228199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.228814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.228826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.229855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b2c50 is same with the state(5) to be set 00:26:17.662 [2024-06-11 15:11:36.229925] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21b2c50 was disconnected and freed. reset controller. 00:26:17.662 [2024-06-11 15:11:36.229937] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.662 [2024-06-11 15:11:36.229999] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:17.662 [2024-06-11 15:11:36.230185] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:17.662 [2024-06-11 15:11:36.230206] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:17.662 [2024-06-11 15:11:36.230242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2109c50 (9): Bad file descriptor 00:26:17.662 [2024-06-11 15:11:36.230258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210c9b0 (9): Bad file descriptor 00:26:17.662 [2024-06-11 15:11:36.230284] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.662 [2024-06-11 15:11:36.230302] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:17.662 [2024-06-11 15:11:36.230366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 
15:11:36.230608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.662 [2024-06-11 15:11:36.230677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.662 [2024-06-11 15:11:36.230687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230847] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.230985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.230995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.663 [2024-06-11 15:11:36.231608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.663 [2024-06-11 15:11:36.231619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.231837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.231847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.233982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.233994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.664 [2024-06-11 15:11:36.234350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.664 [2024-06-11 15:11:36.234361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.234979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.234992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.235002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.235014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.235029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.235042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.235052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.235065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.665 [2024-06-11 15:11:36.235075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.665 [2024-06-11 15:11:36.235088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.235100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.235112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.235122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.235135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.235145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.235156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b4230 is same with the state(5) to be set 00:26:17.666 [2024-06-11 15:11:36.236582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.236987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.236998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.666 [2024-06-11 15:11:36.237436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-06-11 15:11:36.237446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.667 [2024-06-11 15:11:36.237782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.667 [2024-06-11 15:11:36.237792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated I/O abort notices condensed: from 2024-06-11 15:11:36.237 to 15:11:36.244 the log prints paired nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* entries for every outstanding READ and WRITE command on sqid:1 (cids 0-63, LBAs 13824-24192, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); each one completes as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 while the submission queues are deleted during controller reset]
00:26:17.671 [2024-06-11 15:11:36.244290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.671 [2024-06-11 15:11:36.246023] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:17.671 [2024-06-11 15:11:36.246056] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:17.671 [2024-06-11 15:11:36.246070] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:17.671 [2024-06-11 15:11:36.246497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.671 [2024-06-11 15:11:36.246757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.671 [2024-06-11 15:11:36.246772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2125970 with addr=10.0.0.2, port=4420 00:26:17.671 [2024-06-11 15:11:36.246784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125970 is same with the state(5) to be set 00:26:17.671 [2024-06-11 15:11:36.247034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.671 [2024-06-11 15:11:36.247325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.671 [2024-06-11 15:11:36.247339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212db20 with addr=10.0.0.2, port=4420 00:26:17.671 [2024-06-11 15:11:36.247351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212db20 is same with the state(5) to be set 00:26:17.671 [2024-06-11 15:11:36.247362] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.671 [2024-06-11 15:11:36.247372] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.671 [2024-06-11 15:11:36.247384] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.671 [2024-06-11 15:11:36.247405] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:17.671 [2024-06-11 15:11:36.247415] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:17.671 [2024-06-11 15:11:36.247424] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:17.671 [2024-06-11 15:11:36.247467] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.671 [2024-06-11 15:11:36.247482] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.671 [2024-06-11 15:11:36.247501] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.671 [2024-06-11 15:11:36.247514] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.671 [2024-06-11 15:11:36.247529] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:17.671 [2024-06-11 15:11:36.247554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212db20 (9): Bad file descriptor
00:26:17.671 [2024-06-11 15:11:36.247572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2125970 (9): Bad file descriptor
00:26:17.671 [2024-06-11 15:11:36.248307] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:17.671 [2024-06-11 15:11:36.248327] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
task offset: 24192 on job bdev=Nvme1n1 fails
00:26:17.671
00:26:17.671 Latency(us)
00:26:17.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:17.671 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme1n1 ended in about 0.55 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme1n1 : 0.55 297.43 18.59 116.07 0.00 153038.58 29074.15 161099.40
00:26:17.671 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme2n1 ended in about 0.57 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme2n1 : 0.57 220.78 13.80 112.14 0.00 186950.06 121539.49 149660.39
00:26:17.671 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme3n1 ended in about 0.55 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme3n1 : 0.55 296.40 18.53 115.67 0.00 148116.59 22639.71 146800.64
00:26:17.671 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme4n1 ended in about 0.56 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme4n1 : 0.56 293.31 18.33 114.46 0.00 147073.45 46232.67 130595.37
00:26:17.671 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme5n1 ended in about 0.57 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme5n1 : 0.57 290.18 18.14 46.00 0.00 173864.58 7268.54 158239.65
00:26:17.671 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme6n1 ended in about 0.57 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme6n1 : 0.57 289.06 18.07 45.83 0.00 171785.22 5004.57 148707.14
00:26:17.671 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme7n1 ended in about 0.57 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme7n1 : 0.57 219.52 13.72 111.50 0.00 171716.86 114390.11 143940.89
00:26:17.671 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme8n1 ended in about 0.58 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme8n1 : 0.58 218.41 13.65 110.94 0.00 169403.98 95801.72 145847.39
00:26:17.671 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme9n1 ended in about 0.58 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme9n1 : 0.58 217.31 13.58 110.38 0.00 167256.65 111053.73 143940.89
00:26:17.671 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:17.671 Job: Nvme10n1 ended in about 0.58 seconds with error
00:26:17.671 Verification LBA range: start 0x0 length 0x400
00:26:17.671 Nvme10n1 : 0.58 216.08 13.51 109.76 0.00 165263.60 89128.96 146800.64
00:26:17.671 ===================================================================================================================
00:26:17.671 Total : 2558.49 159.91 992.75 0.00 164539.18 5004.57 161099.40
00:26:17.671 [2024-06-11 15:11:36.282145] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:17.671 [2024-06-11 15:11:36.282196] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:17.671 [2024-06-11 15:11:36.282217] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:17.671 [2024-06-11 15:11:36.282228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:17.671 [2024-06-11 15:11:36.282656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.283004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.283022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206d520 with addr=10.0.0.2, port=4420
00:26:17.671 [2024-06-11 15:11:36.283043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d520 is same with the state(5) to be set
00:26:17.671 [2024-06-11 15:11:36.283239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.283500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.283515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212b4b0 with addr=10.0.0.2, port=4420
00:26:17.671 [2024-06-11 15:11:36.283526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212b4b0 is same with the state(5) to be set
00:26:17.671 [2024-06-11 15:11:36.283845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.284155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.284172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21cde20 with addr=10.0.0.2, port=4420
00:26:17.671 [2024-06-11 15:11:36.284183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cde20 is same with the state(5) to be set
00:26:17.671 [2024-06-11 15:11:36.286211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.286522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.286539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21cd6a0 with addr=10.0.0.2, port=4420
00:26:17.671 [2024-06-11 15:11:36.286551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cd6a0 is same with the state(5) to be set
00:26:17.671 [2024-06-11 15:11:36.286884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.287175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.671 [2024-06-11 15:11:36.287191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219db80 with addr=10.0.0.2, port=4420
[2024-06-11 15:11:36.287203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219db80 is same with the state(5) to be set 00:26:17.671 [2024-06-11 15:11:36.287414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.671 [2024-06-11 15:11:36.287621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.671 [2024-06-11 15:11:36.287637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124bf0 with addr=10.0.0.2, port=4420 00:26:17.671 [2024-06-11 15:11:36.287647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124bf0 is same with the state(5) to be set 00:26:17.671 [2024-06-11 15:11:36.287665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206d520 (9): Bad file descriptor 00:26:17.671 [2024-06-11 15:11:36.287680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212b4b0 (9): Bad file descriptor 00:26:17.671 [2024-06-11 15:11:36.287693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cde20 (9): Bad file descriptor 00:26:17.671 [2024-06-11 15:11:36.287705] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:17.671 [2024-06-11 15:11:36.287715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:17.671 [2024-06-11 15:11:36.287732] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:26:17.671 [2024-06-11 15:11:36.287755] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:17.672 [2024-06-11 15:11:36.287765] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:17.672 [2024-06-11 15:11:36.287775] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:17.672 [2024-06-11 15:11:36.287831] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.672 [2024-06-11 15:11:36.287848] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.672 [2024-06-11 15:11:36.287862] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.672 [2024-06-11 15:11:36.287876] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.672 [2024-06-11 15:11:36.287891] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:17.672 [2024-06-11 15:11:36.287991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.672 [2024-06-11 15:11:36.288005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.672 [2024-06-11 15:11:36.288032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cd6a0 (9): Bad file descriptor 00:26:17.672 [2024-06-11 15:11:36.288046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219db80 (9): Bad file descriptor 00:26:17.672 [2024-06-11 15:11:36.288058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124bf0 (9): Bad file descriptor 00:26:17.672 [2024-06-11 15:11:36.288070] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:17.672 [2024-06-11 15:11:36.288079] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:17.672 [2024-06-11 15:11:36.288088] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:17.672 [2024-06-11 15:11:36.288103] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:17.672 [2024-06-11 15:11:36.288112] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:17.672 [2024-06-11 15:11:36.288121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:17.672 [2024-06-11 15:11:36.288136] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:17.672 [2024-06-11 15:11:36.288145] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:17.672 [2024-06-11 15:11:36.288154] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:17.672 [2024-06-11 15:11:36.288223] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:17.672 [2024-06-11 15:11:36.288239] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.672 [2024-06-11 15:11:36.288250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.672 [2024-06-11 15:11:36.288259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.672 [2024-06-11 15:11:36.288268] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.672 [2024-06-11 15:11:36.288291] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:17.672 [2024-06-11 15:11:36.288300] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:17.672 [2024-06-11 15:11:36.288314] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:17.672 [2024-06-11 15:11:36.288327] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:17.672 [2024-06-11 15:11:36.288338] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:17.672 [2024-06-11 15:11:36.288348] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:26:17.672 [2024-06-11 15:11:36.288360] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:17.672 [2024-06-11 15:11:36.288370] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:17.672 [2024-06-11 15:11:36.288380] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:17.672 [2024-06-11 15:11:36.288416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.672 [2024-06-11 15:11:36.288427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.672 [2024-06-11 15:11:36.288435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.672 [2024-06-11 15:11:36.288808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.672 [2024-06-11 15:11:36.289054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.672 [2024-06-11 15:11:36.289070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x210c9b0 with addr=10.0.0.2, port=4420 00:26:17.672 [2024-06-11 15:11:36.289081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210c9b0 is same with the state(5) to be set 00:26:17.672 [2024-06-11 15:11:36.289403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.672 [2024-06-11 15:11:36.289683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.672 [2024-06-11 15:11:36.289699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2109c50 with addr=10.0.0.2, port=4420 00:26:17.672 [2024-06-11 15:11:36.289709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109c50 is same with the state(5) to be set 00:26:17.672 [2024-06-11 15:11:36.289749] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210c9b0 (9): Bad file descriptor 00:26:17.672 [2024-06-11 15:11:36.289764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2109c50 (9): Bad file descriptor 00:26:17.672 [2024-06-11 15:11:36.289797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:17.672 [2024-06-11 15:11:36.289808] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:17.672 [2024-06-11 15:11:36.289819] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:17.672 [2024-06-11 15:11:36.289831] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.672 [2024-06-11 15:11:36.289841] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.672 [2024-06-11 15:11:36.289851] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.672 [2024-06-11 15:11:36.289887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.672 [2024-06-11 15:11:36.289898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.931 15:11:36 -- target/shutdown.sh@135 -- # nvmfpid= 00:26:17.931 15:11:36 -- target/shutdown.sh@138 -- # sleep 1 00:26:18.868 15:11:37 -- target/shutdown.sh@141 -- # kill -9 3391548 00:26:18.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3391548) - No such process 00:26:18.868 15:11:37 -- target/shutdown.sh@141 -- # true 00:26:18.868 15:11:37 -- target/shutdown.sh@143 -- # stoptarget 00:26:18.868 15:11:37 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:18.868 15:11:37 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:18.868 15:11:37 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:18.868 15:11:37 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:18.868 15:11:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:18.868 15:11:37 -- nvmf/common.sh@116 -- # sync 00:26:18.868 15:11:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:18.868 15:11:37 -- nvmf/common.sh@119 -- # set +e 00:26:18.868 15:11:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:18.868 15:11:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:18.868 rmmod nvme_tcp 00:26:18.868 rmmod nvme_fabrics 00:26:18.868 rmmod nvme_keyring 00:26:19.128 15:11:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:19.128 15:11:37 -- nvmf/common.sh@123 -- # set -e 00:26:19.128 15:11:37 -- nvmf/common.sh@124 -- # return 0 00:26:19.128 15:11:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:26:19.128 15:11:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:19.128 15:11:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:19.128 15:11:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:19.128 15:11:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.128 15:11:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:19.128 15:11:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.128 15:11:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.128 15:11:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.035 15:11:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:21.035 00:26:21.035 real 0m7.746s 00:26:21.035 user 0m18.941s 00:26:21.035 sys 0m1.301s 00:26:21.035 15:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.035 15:11:39 -- common/autotest_common.sh@10 -- # set +x 00:26:21.035 ************************************ 00:26:21.035 END TEST nvmf_shutdown_tc3 00:26:21.035 ************************************ 00:26:21.035 15:11:39 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:26:21.035 00:26:21.035 real 0m32.324s 00:26:21.035 user 1m19.830s 00:26:21.035 sys 0m9.040s 00:26:21.035 15:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.035 15:11:39 -- common/autotest_common.sh@10 -- # set +x 00:26:21.035 ************************************ 00:26:21.035 END TEST nvmf_shutdown 00:26:21.035 ************************************ 00:26:21.035 15:11:39 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:26:21.035 15:11:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:21.035 15:11:39 -- common/autotest_common.sh@10 -- # set +x 00:26:21.295 15:11:39 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:26:21.295 15:11:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:21.295 15:11:39 -- common/autotest_common.sh@10 -- # set +x 00:26:21.295 
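(Editor's note, a minimal sketch.) The nvmftestfini teardown traced above reduces to unloading the NVMe/TCP initiator modules, dropping the target network namespace, and flushing the initiator-side address. The namespace and interface names (cvl_0_0_ns_spdk, cvl_0_1) are the ones used in this run, and `ip netns delete` is assumed to be what the _remove_spdk_ns helper boils down to; treat this as an illustration, not the helper's exact code.

#!/usr/bin/env bash
# Minimal sketch of the teardown traced above (assumptions noted in the note).
set -e

modprobe -v -r nvme-tcp                               # drops nvme_tcp and its users
modprobe -v -r nvme-fabrics                           # then nvme_fabrics / nvme_keyring
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                              # clear the initiator-side address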
15:11:39 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:26:21.295 15:11:39 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:21.295 15:11:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:21.295 15:11:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.295 15:11:39 -- common/autotest_common.sh@10 -- # set +x 00:26:21.295 ************************************ 00:26:21.295 START TEST nvmf_multicontroller 00:26:21.295 ************************************ 00:26:21.295 15:11:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:21.295 * Looking for test storage... 00:26:21.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.295 15:11:39 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.295 15:11:39 -- nvmf/common.sh@7 -- # uname -s 00:26:21.295 15:11:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.295 15:11:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.295 15:11:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.295 15:11:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.295 15:11:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.295 15:11:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.295 15:11:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.295 15:11:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.295 15:11:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.295 15:11:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.295 15:11:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:21.295 15:11:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:21.295 15:11:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.295 15:11:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.295 15:11:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.295 15:11:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.295 15:11:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.295 15:11:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.295 15:11:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.295 15:11:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.295 15:11:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.295 15:11:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.295 15:11:40 -- paths/export.sh@5 -- # export PATH 00:26:21.295 15:11:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.295 15:11:40 -- nvmf/common.sh@46 -- # : 0 00:26:21.295 15:11:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:21.295 15:11:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:21.295 15:11:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:21.295 15:11:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.295 15:11:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.295 15:11:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:21.295 15:11:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:21.295 15:11:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:21.295 15:11:40 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:21.295 15:11:40 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:21.295 15:11:40 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:21.295 15:11:40 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:21.295 15:11:40 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:21.295 15:11:40 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:21.295 15:11:40 -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:21.295 15:11:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:21.295 15:11:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.295 15:11:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:21.295 15:11:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:21.295 15:11:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:21.295 15:11:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.295 15:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.295 15:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:26:21.295 15:11:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:21.295 15:11:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:21.295 15:11:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:21.295 15:11:40 -- common/autotest_common.sh@10 -- # set +x 00:26:27.861 15:11:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:27.861 15:11:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:27.861 15:11:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:27.861 15:11:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:27.861 15:11:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:27.861 15:11:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:27.861 15:11:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:27.861 15:11:45 -- nvmf/common.sh@294 -- # net_devs=() 00:26:27.861 15:11:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:27.861 15:11:45 -- nvmf/common.sh@295 -- # e810=() 00:26:27.861 15:11:45 -- nvmf/common.sh@295 -- # local -ga e810 00:26:27.861 15:11:45 -- nvmf/common.sh@296 -- # x722=() 00:26:27.861 15:11:45 -- nvmf/common.sh@296 -- # local -ga x722 00:26:27.861 15:11:45 -- nvmf/common.sh@297 -- # mlx=() 00:26:27.861 15:11:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:27.861 15:11:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.861 15:11:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:27.861 15:11:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:27.861 15:11:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:27.861 15:11:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:27.861 15:11:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:27.861 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:27.861 15:11:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:27.861 15:11:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:27.861 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:27.861 15:11:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
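(Editor's note.) The device discovery traced here walks sysfs: for each supported E810 PCI function it lists the net devices bound to that function. A standalone sketch of the same lookup, using the PCI addresses found in this run:

#!/usr/bin/env bash
# List the netdev(s) behind each NIC PCI function, as the discovery above does.
for pci in 0000:af:00.0 0000:af:00.1; do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: $(basename "$netdev")"
    done
done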
00:26:27.861 15:11:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:27.861 15:11:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:27.861 15:11:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.861 15:11:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:27.861 15:11:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.861 15:11:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:27.861 Found net devices under 0000:af:00.0: cvl_0_0 00:26:27.861 15:11:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.861 15:11:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:27.861 15:11:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.861 15:11:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:27.861 15:11:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.861 15:11:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:27.861 Found net devices under 0000:af:00.1: cvl_0_1 00:26:27.861 15:11:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.861 15:11:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:27.861 15:11:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:27.861 15:11:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:27.861 15:11:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.861 15:11:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.861 15:11:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.861 15:11:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:27.861 15:11:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.861 15:11:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.861 15:11:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:27.861 15:11:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.861 15:11:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.861 15:11:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:27.861 15:11:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:27.861 15:11:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.861 15:11:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.861 15:11:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.861 15:11:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.861 15:11:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:27.861 15:11:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.861 15:11:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.861 15:11:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:26:27.861 15:11:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:27.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:26:27.861 00:26:27.861 --- 10.0.0.2 ping statistics --- 00:26:27.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.861 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:27.861 15:11:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:27.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:26:27.861 00:26:27.861 --- 10.0.0.1 ping statistics --- 00:26:27.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.861 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:26:27.861 15:11:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.861 15:11:45 -- nvmf/common.sh@410 -- # return 0 00:26:27.861 15:11:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:27.861 15:11:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.861 15:11:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:27.861 15:11:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.861 15:11:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:27.861 15:11:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:27.861 15:11:45 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:27.861 15:11:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:27.861 15:11:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:27.861 15:11:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.861 15:11:45 -- nvmf/common.sh@469 -- # nvmfpid=3396117 00:26:27.861 15:11:45 -- nvmf/common.sh@470 -- # waitforlisten 3396117 00:26:27.861 15:11:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:27.861 15:11:45 -- common/autotest_common.sh@819 -- # '[' -z 3396117 ']' 00:26:27.861 15:11:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.861 15:11:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:27.861 15:11:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.861 15:11:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:27.861 15:11:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.861 [2024-06-11 15:11:45.971254] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:27.861 [2024-06-11 15:11:45.971307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.861 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.861 [2024-06-11 15:11:46.058870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:27.861 [2024-06-11 15:11:46.145831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:27.861 [2024-06-11 15:11:46.145976] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
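(Editor's note.) The nvmf_tcp_init sequence traced above can be read as the following standalone script: one port of the NIC is moved into a private network namespace to act as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened, and reachability is verified in both directions. Interface names are the cvl_0_0 / cvl_0_1 devices discovered in this run.

#!/usr/bin/env bash
# Sketch of the namespace plumbing traced above.
set -e

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target-side port into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in

ping -c 1 10.0.0.2                                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator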
00:26:27.861 [2024-06-11 15:11:46.145987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.861 [2024-06-11 15:11:46.145997] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.861 [2024-06-11 15:11:46.146102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.861 [2024-06-11 15:11:46.146225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.861 [2024-06-11 15:11:46.146225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.122 15:11:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:28.122 15:11:46 -- common/autotest_common.sh@852 -- # return 0 00:26:28.122 15:11:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:28.122 15:11:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:28.122 15:11:46 -- common/autotest_common.sh@10 -- # set +x 00:26:28.122 15:11:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.122 15:11:46 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.122 15:11:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.122 15:11:46 -- common/autotest_common.sh@10 -- # set +x 00:26:28.122 [2024-06-11 15:11:46.956629] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.122 15:11:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.122 15:11:46 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:28.122 15:11:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.122 15:11:46 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 Malloc0 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.381 15:11:47 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:28.381 15:11:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.381 15:11:47 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:28.381 15:11:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.381 15:11:47 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:28.381 15:11:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 [2024-06-11 15:11:47.029375] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.381 15:11:47 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:28.381 15:11:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 [2024-06-11 15:11:47.037328] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
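(Editor's note.) The target-side configuration that rpc_cmd drives above amounts to six RPCs: create the TCP transport, back a subsystem with a malloc bdev, and expose it on the 4420 and 4421 listeners. A condensed sketch, written as direct scripts/rpc.py calls against the default /var/tmp/spdk.sock (the rpc.py path and socket are assumptions; the arguments are the ones traced above):

#!/usr/bin/env bash
# Sketch of the cnode1 setup traced above, as direct rpc.py calls.
RPC="./scripts/rpc.py"   # assumed path inside an SPDK checkout

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The same pattern is repeated in the trace that follows for cnode2 backed by Malloc1 on the same two listeners.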
00:26:28.381 15:11:47 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:28.381 15:11:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 Malloc1 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.381 15:11:47 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:28.381 15:11:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.381 15:11:47 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:28.381 15:11:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.381 15:11:47 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:28.381 15:11:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.381 15:11:47 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:28.381 15:11:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 15:11:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.381 15:11:47 -- host/multicontroller.sh@44 -- # bdevperf_pid=3396316 00:26:28.381 15:11:47 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:28.381 15:11:47 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:28.381 15:11:47 -- host/multicontroller.sh@47 -- # waitforlisten 3396316 /var/tmp/bdevperf.sock 00:26:28.381 15:11:47 -- common/autotest_common.sh@819 -- # '[' -z 3396316 ']' 00:26:28.381 15:11:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.381 15:11:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.381 15:11:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:28.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
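(Editor's note.) The initiator side traced here starts bdevperf idle (-z) on its own RPC socket and only then attaches the NVMe-oF controller through that socket. A minimal sketch using the paths from this job (SPDK_DIR is an assumption for a local checkout; the real test polls the socket with a waitforlisten helper rather than sleeping):

#!/usr/bin/env bash
# Sketch of the bdevperf startup and first controller attach traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

"$SPDK_DIR/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w write -t 1 -f &
sleep 2   # crude stand-in for the waitforlisten poll on $SOCK

"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000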
00:26:28.381 15:11:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.381 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:26:29.316 15:11:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.316 15:11:48 -- common/autotest_common.sh@852 -- # return 0 00:26:29.316 15:11:48 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:29.316 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.316 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.316 NVMe0n1 00:26:29.316 15:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.316 15:11:48 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:29.316 15:11:48 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:29.316 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.316 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.316 15:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.316 1 00:26:29.316 15:11:48 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:29.316 15:11:48 -- common/autotest_common.sh@640 -- # local es=0 00:26:29.316 15:11:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:29.316 15:11:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:29.316 15:11:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.316 15:11:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:29.316 15:11:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.316 15:11:48 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:29.316 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.316 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.316 request: 00:26:29.316 { 00:26:29.316 "name": "NVMe0", 00:26:29.316 "trtype": "tcp", 00:26:29.316 "traddr": "10.0.0.2", 00:26:29.316 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:29.316 "hostaddr": "10.0.0.2", 00:26:29.316 "hostsvcid": "60000", 00:26:29.316 "adrfam": "ipv4", 00:26:29.316 "trsvcid": "4420", 00:26:29.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.316 "method": "bdev_nvme_attach_controller", 00:26:29.316 "req_id": 1 00:26:29.316 } 00:26:29.316 Got JSON-RPC error response 00:26:29.316 response: 00:26:29.316 { 00:26:29.316 "code": -114, 00:26:29.316 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:29.316 } 00:26:29.316 15:11:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:29.316 15:11:48 -- common/autotest_common.sh@643 -- # es=1 00:26:29.316 15:11:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:29.316 15:11:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:29.316 15:11:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:29.316 15:11:48 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:29.316 15:11:48 -- common/autotest_common.sh@640 -- # local es=0 00:26:29.316 15:11:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:29.316 15:11:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:29.316 15:11:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.316 15:11:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:29.316 15:11:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.316 15:11:48 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:29.316 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.575 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.575 request: 00:26:29.575 { 00:26:29.575 "name": "NVMe0", 00:26:29.575 "trtype": "tcp", 00:26:29.575 "traddr": "10.0.0.2", 00:26:29.575 "hostaddr": "10.0.0.2", 00:26:29.575 "hostsvcid": "60000", 00:26:29.575 "adrfam": "ipv4", 00:26:29.575 "trsvcid": "4420", 00:26:29.575 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:29.575 "method": "bdev_nvme_attach_controller", 00:26:29.575 "req_id": 1 00:26:29.575 } 00:26:29.575 Got JSON-RPC error response 00:26:29.575 response: 00:26:29.575 { 00:26:29.575 "code": -114, 00:26:29.575 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:29.575 } 00:26:29.575 15:11:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:29.575 15:11:48 -- common/autotest_common.sh@643 -- # es=1 00:26:29.575 15:11:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:29.575 15:11:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:29.575 15:11:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:29.575 15:11:48 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:29.575 15:11:48 -- common/autotest_common.sh@640 -- # local es=0 00:26:29.575 15:11:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:29.575 15:11:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:29.575 15:11:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.575 15:11:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:29.575 15:11:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.575 15:11:48 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:29.575 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.575 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.575 request: 00:26:29.575 { 00:26:29.575 "name": "NVMe0", 00:26:29.575 "trtype": "tcp", 00:26:29.575 "traddr": "10.0.0.2", 00:26:29.575 "hostaddr": 
"10.0.0.2", 00:26:29.575 "hostsvcid": "60000", 00:26:29.575 "adrfam": "ipv4", 00:26:29.575 "trsvcid": "4420", 00:26:29.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.575 "multipath": "disable", 00:26:29.575 "method": "bdev_nvme_attach_controller", 00:26:29.575 "req_id": 1 00:26:29.575 } 00:26:29.575 Got JSON-RPC error response 00:26:29.575 response: 00:26:29.575 { 00:26:29.575 "code": -114, 00:26:29.575 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:29.575 } 00:26:29.575 15:11:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:29.575 15:11:48 -- common/autotest_common.sh@643 -- # es=1 00:26:29.575 15:11:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:29.575 15:11:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:29.575 15:11:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:29.575 15:11:48 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:29.575 15:11:48 -- common/autotest_common.sh@640 -- # local es=0 00:26:29.575 15:11:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:29.575 15:11:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:29.575 15:11:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.575 15:11:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:29.575 15:11:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.575 15:11:48 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:29.575 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.575 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.575 request: 00:26:29.575 { 00:26:29.575 "name": "NVMe0", 00:26:29.575 "trtype": "tcp", 00:26:29.575 "traddr": "10.0.0.2", 00:26:29.575 "hostaddr": "10.0.0.2", 00:26:29.575 "hostsvcid": "60000", 00:26:29.575 "adrfam": "ipv4", 00:26:29.575 "trsvcid": "4420", 00:26:29.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.575 "multipath": "failover", 00:26:29.575 "method": "bdev_nvme_attach_controller", 00:26:29.575 "req_id": 1 00:26:29.575 } 00:26:29.575 Got JSON-RPC error response 00:26:29.575 response: 00:26:29.575 { 00:26:29.575 "code": -114, 00:26:29.575 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:29.575 } 00:26:29.575 15:11:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:29.575 15:11:48 -- common/autotest_common.sh@643 -- # es=1 00:26:29.575 15:11:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:29.575 15:11:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:29.575 15:11:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:29.575 15:11:48 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:29.575 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.575 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.834 00:26:29.834 15:11:48 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:26:29.834 15:11:48 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:29.834 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.834 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.834 15:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.834 15:11:48 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:29.834 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.834 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.834 00:26:29.834 15:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.834 15:11:48 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:29.834 15:11:48 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:29.834 15:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.834 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.834 15:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.834 15:11:48 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:29.834 15:11:48 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:31.210 0 00:26:31.210 15:11:49 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:31.210 15:11:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:31.210 15:11:49 -- common/autotest_common.sh@10 -- # set +x 00:26:31.210 15:11:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:31.210 15:11:49 -- host/multicontroller.sh@100 -- # killprocess 3396316 00:26:31.210 15:11:49 -- common/autotest_common.sh@926 -- # '[' -z 3396316 ']' 00:26:31.210 15:11:49 -- common/autotest_common.sh@930 -- # kill -0 3396316 00:26:31.210 15:11:49 -- common/autotest_common.sh@931 -- # uname 00:26:31.210 15:11:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:31.210 15:11:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3396316 00:26:31.210 15:11:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:31.210 15:11:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:31.210 15:11:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3396316' 00:26:31.210 killing process with pid 3396316 00:26:31.210 15:11:49 -- common/autotest_common.sh@945 -- # kill 3396316 00:26:31.210 15:11:49 -- common/autotest_common.sh@950 -- # wait 3396316 00:26:31.469 15:11:50 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:31.469 15:11:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:31.469 15:11:50 -- common/autotest_common.sh@10 -- # set +x 00:26:31.469 15:11:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:31.469 15:11:50 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:31.469 15:11:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:31.470 15:11:50 -- common/autotest_common.sh@10 -- # set +x 00:26:31.470 15:11:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:31.470 15:11:50 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
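(Editor's note.) Condensed, the steps traced since the first attach are: repeat the attach with a conflicting hostnqn, target subsystem, or multipath mode and expect JSON-RPC error -114; attach a second controller on the 4421 listener; confirm two controllers are present; then drive the queued job through bdevperf.py. A hedged sketch of that flow, with paths and socket as in this job:

#!/usr/bin/env bash
# Sketch of the negative attach checks and the I/O phase traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"

# Expected to fail with code -114: a controller named NVMe0 already exists on
# this path and '-x disable' forbids adding another one.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable || true

# Second controller on the 4421 listener, then sanity-check the count.
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
[ "$($RPC bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ]

# Kick off the bdevperf job(s) queued by the -z instance.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests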
00:26:31.470 15:11:50 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:31.470 15:11:50 -- common/autotest_common.sh@1597 -- # read -r file 00:26:31.470 15:11:50 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:31.470 15:11:50 -- common/autotest_common.sh@1596 -- # sort -u 00:26:31.470 15:11:50 -- common/autotest_common.sh@1598 -- # cat 00:26:31.470 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:31.470 [2024-06-11 15:11:47.139214] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:31.470 [2024-06-11 15:11:47.139279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396316 ] 00:26:31.470 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.470 [2024-06-11 15:11:47.229263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.470 [2024-06-11 15:11:47.316955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.470 [2024-06-11 15:11:48.638522] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 63075dfc-d9c2-4d69-ab99-ea84b3072c8c already exists 00:26:31.470 [2024-06-11 15:11:48.638558] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:63075dfc-d9c2-4d69-ab99-ea84b3072c8c alias for bdev NVMe1n1 00:26:31.470 [2024-06-11 15:11:48.638571] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:31.470 Running I/O for 1 seconds... 00:26:31.470 00:26:31.470 Latency(us) 00:26:31.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.470 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:31.470 NVMe0n1 : 1.01 17026.55 66.51 0.00 0.00 7495.53 4766.25 16681.89 00:26:31.470 =================================================================================================================== 00:26:31.470 Total : 17026.55 66.51 0.00 0.00 7495.53 4766.25 16681.89 00:26:31.470 Received shutdown signal, test time was about 1.000000 seconds 00:26:31.470 00:26:31.470 Latency(us) 00:26:31.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.470 =================================================================================================================== 00:26:31.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.470 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:31.470 15:11:50 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:31.470 15:11:50 -- common/autotest_common.sh@1597 -- # read -r file 00:26:31.470 15:11:50 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:31.470 15:11:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:31.470 15:11:50 -- nvmf/common.sh@116 -- # sync 00:26:31.470 15:11:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:31.470 15:11:50 -- nvmf/common.sh@119 -- # set +e 00:26:31.470 15:11:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:31.470 15:11:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:31.470 rmmod nvme_tcp 00:26:31.470 rmmod nvme_fabrics 00:26:31.470 rmmod nvme_keyring 00:26:31.470 15:11:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:31.470 15:11:50 -- nvmf/common.sh@123 -- # set 
-e 00:26:31.470 15:11:50 -- nvmf/common.sh@124 -- # return 0 00:26:31.470 15:11:50 -- nvmf/common.sh@477 -- # '[' -n 3396117 ']' 00:26:31.470 15:11:50 -- nvmf/common.sh@478 -- # killprocess 3396117 00:26:31.470 15:11:50 -- common/autotest_common.sh@926 -- # '[' -z 3396117 ']' 00:26:31.470 15:11:50 -- common/autotest_common.sh@930 -- # kill -0 3396117 00:26:31.470 15:11:50 -- common/autotest_common.sh@931 -- # uname 00:26:31.470 15:11:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:31.470 15:11:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3396117 00:26:31.470 15:11:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:31.470 15:11:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:31.470 15:11:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3396117' 00:26:31.470 killing process with pid 3396117 00:26:31.470 15:11:50 -- common/autotest_common.sh@945 -- # kill 3396117 00:26:31.470 15:11:50 -- common/autotest_common.sh@950 -- # wait 3396117 00:26:31.729 15:11:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:31.729 15:11:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:31.729 15:11:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:31.729 15:11:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.729 15:11:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:31.729 15:11:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.729 15:11:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.729 15:11:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.266 15:11:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:34.266 00:26:34.266 real 0m12.664s 00:26:34.266 user 0m17.814s 00:26:34.266 sys 0m5.227s 00:26:34.266 15:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.266 15:11:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.266 ************************************ 00:26:34.266 END TEST nvmf_multicontroller 00:26:34.266 ************************************ 00:26:34.266 15:11:52 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:34.266 15:11:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:34.266 15:11:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:34.266 15:11:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.266 ************************************ 00:26:34.266 START TEST nvmf_aer 00:26:34.266 ************************************ 00:26:34.266 15:11:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:34.266 * Looking for test storage... 
00:26:34.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.267 15:11:52 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.267 15:11:52 -- nvmf/common.sh@7 -- # uname -s 00:26:34.267 15:11:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.267 15:11:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.267 15:11:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.267 15:11:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.267 15:11:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.267 15:11:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.267 15:11:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.267 15:11:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.267 15:11:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.267 15:11:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.267 15:11:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:34.267 15:11:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:34.267 15:11:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.267 15:11:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.267 15:11:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.267 15:11:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.267 15:11:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.267 15:11:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.267 15:11:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.267 15:11:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.267 15:11:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.267 15:11:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.267 15:11:52 -- paths/export.sh@5 -- # export PATH 00:26:34.267 15:11:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.267 15:11:52 -- nvmf/common.sh@46 -- # : 0 00:26:34.267 15:11:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:34.267 15:11:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:34.267 15:11:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:34.267 15:11:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.267 15:11:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.267 15:11:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:34.267 15:11:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:34.267 15:11:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:34.267 15:11:52 -- host/aer.sh@11 -- # nvmftestinit 00:26:34.267 15:11:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:34.267 15:11:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.267 15:11:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:34.267 15:11:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:34.267 15:11:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:34.267 15:11:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.267 15:11:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.267 15:11:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.267 15:11:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:34.267 15:11:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:34.267 15:11:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:34.267 15:11:52 -- common/autotest_common.sh@10 -- # set +x 00:26:40.834 15:11:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:40.834 15:11:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:40.834 15:11:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:40.834 15:11:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:40.834 15:11:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:40.834 15:11:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:40.834 15:11:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:40.834 15:11:58 -- nvmf/common.sh@294 -- # net_devs=() 00:26:40.834 15:11:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:40.834 15:11:58 -- nvmf/common.sh@295 -- # e810=() 00:26:40.834 15:11:58 -- nvmf/common.sh@295 -- # local -ga e810 00:26:40.834 15:11:58 -- nvmf/common.sh@296 -- # x722=() 00:26:40.834 
15:11:58 -- nvmf/common.sh@296 -- # local -ga x722 00:26:40.834 15:11:58 -- nvmf/common.sh@297 -- # mlx=() 00:26:40.834 15:11:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:40.834 15:11:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.834 15:11:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:40.834 15:11:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:40.834 15:11:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:40.834 15:11:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:40.834 15:11:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:40.834 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:40.834 15:11:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:40.834 15:11:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:40.834 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:40.834 15:11:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:40.834 15:11:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:40.834 15:11:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.834 15:11:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:40.834 15:11:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.834 15:11:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:40.834 Found net devices under 0000:af:00.0: cvl_0_0 00:26:40.834 15:11:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.834 15:11:58 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:40.834 15:11:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.834 15:11:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:40.834 15:11:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.834 15:11:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:40.834 Found net devices under 0000:af:00.1: cvl_0_1 00:26:40.834 15:11:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.834 15:11:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:40.834 15:11:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:40.834 15:11:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:40.834 15:11:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:40.834 15:11:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.834 15:11:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.834 15:11:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.834 15:11:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:40.834 15:11:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.834 15:11:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.834 15:11:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:40.834 15:11:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.834 15:11:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.834 15:11:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:40.834 15:11:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:40.834 15:11:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.834 15:11:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.834 15:11:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.834 15:11:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.834 15:11:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:40.834 15:11:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.834 15:11:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.834 15:11:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.834 15:11:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:40.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:26:40.834 00:26:40.834 --- 10.0.0.2 ping statistics --- 00:26:40.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.834 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:40.834 15:11:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:26:40.834 00:26:40.834 --- 10.0.0.1 ping statistics --- 00:26:40.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.834 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:26:40.834 15:11:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.834 15:11:59 -- nvmf/common.sh@410 -- # return 0 00:26:40.834 15:11:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:40.834 15:11:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.834 15:11:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:40.834 15:11:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:40.834 15:11:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.834 15:11:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:40.834 15:11:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:40.834 15:11:59 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:40.834 15:11:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:40.834 15:11:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:40.834 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:26:40.834 15:11:59 -- nvmf/common.sh@469 -- # nvmfpid=3400894 00:26:40.834 15:11:59 -- nvmf/common.sh@470 -- # waitforlisten 3400894 00:26:40.834 15:11:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:40.834 15:11:59 -- common/autotest_common.sh@819 -- # '[' -z 3400894 ']' 00:26:40.834 15:11:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.834 15:11:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:40.834 15:11:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.834 15:11:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:40.834 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:26:40.834 [2024-06-11 15:11:59.269083] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:40.834 [2024-06-11 15:11:59.269139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.834 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.834 [2024-06-11 15:11:59.367279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.834 [2024-06-11 15:11:59.457495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:40.834 [2024-06-11 15:11:59.457636] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.834 [2024-06-11 15:11:59.457649] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.834 [2024-06-11 15:11:59.457658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
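For reference, the TCP test bed that nvmf_tcp_init assembles in the trace above reduces to the following commands (a minimal sketch distilled from this run; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and port 4420 are specific to this host and configuration):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                  # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow inbound TCP/4420 on cvl_0_1
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check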
00:26:40.835 [2024-06-11 15:11:59.457707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.835 [2024-06-11 15:11:59.457807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.835 [2024-06-11 15:11:59.457933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.835 [2024-06-11 15:11:59.457934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.401 15:12:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:41.401 15:12:00 -- common/autotest_common.sh@852 -- # return 0 00:26:41.401 15:12:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:41.401 15:12:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:41.401 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 15:12:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.660 15:12:00 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:41.660 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.660 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 [2024-06-11 15:12:00.250808] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.660 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.660 15:12:00 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:41.660 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.660 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 Malloc0 00:26:41.660 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.660 15:12:00 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:41.660 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.660 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.660 15:12:00 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:41.660 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.660 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.660 15:12:00 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.660 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.660 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.661 [2024-06-11 15:12:00.306503] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.661 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.661 15:12:00 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:41.661 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.661 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.661 [2024-06-11 15:12:00.314246] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:41.661 [ 00:26:41.661 { 00:26:41.661 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:41.661 "subtype": "Discovery", 00:26:41.661 "listen_addresses": [], 00:26:41.661 "allow_any_host": true, 00:26:41.661 "hosts": [] 00:26:41.661 }, 00:26:41.661 { 00:26:41.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:41.661 "subtype": "NVMe", 00:26:41.661 "listen_addresses": [ 00:26:41.661 { 00:26:41.661 "transport": "TCP", 00:26:41.661 "trtype": "TCP", 00:26:41.661 "adrfam": "IPv4", 00:26:41.661 "traddr": "10.0.0.2", 00:26:41.661 "trsvcid": "4420" 00:26:41.661 } 00:26:41.661 ], 00:26:41.661 "allow_any_host": true, 00:26:41.661 "hosts": [], 00:26:41.661 "serial_number": "SPDK00000000000001", 00:26:41.661 "model_number": "SPDK bdev Controller", 00:26:41.661 "max_namespaces": 2, 00:26:41.661 "min_cntlid": 1, 00:26:41.661 "max_cntlid": 65519, 00:26:41.661 "namespaces": [ 00:26:41.661 { 00:26:41.661 "nsid": 1, 00:26:41.661 "bdev_name": "Malloc0", 00:26:41.661 "name": "Malloc0", 00:26:41.661 "nguid": "7ECEAB9CF46E4A50B0AD16E25D28A866", 00:26:41.661 "uuid": "7eceab9c-f46e-4a50-b0ad-16e25d28a866" 00:26:41.661 } 00:26:41.661 ] 00:26:41.661 } 00:26:41.661 ] 00:26:41.661 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.661 15:12:00 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:41.661 15:12:00 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:41.661 15:12:00 -- host/aer.sh@33 -- # aerpid=3401205 00:26:41.661 15:12:00 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:41.661 15:12:00 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:41.661 15:12:00 -- common/autotest_common.sh@1244 -- # local i=0 00:26:41.661 15:12:00 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:41.661 15:12:00 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:26:41.661 15:12:00 -- common/autotest_common.sh@1247 -- # i=1 00:26:41.661 15:12:00 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:41.661 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.661 15:12:00 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:41.661 15:12:00 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:26:41.661 15:12:00 -- common/autotest_common.sh@1247 -- # i=2 00:26:41.661 15:12:00 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:41.920 15:12:00 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:41.920 15:12:00 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:41.920 15:12:00 -- common/autotest_common.sh@1255 -- # return 0 00:26:41.920 15:12:00 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:41.920 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.920 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.920 Malloc1 00:26:41.920 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.920 15:12:00 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:41.920 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.920 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.920 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.920 15:12:00 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:41.920 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.920 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.920 Asynchronous Event Request test 00:26:41.920 Attaching to 10.0.0.2 00:26:41.920 Attached to 10.0.0.2 00:26:41.920 Registering asynchronous event callbacks... 
00:26:41.920 Starting namespace attribute notice tests for all controllers... 00:26:41.920 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:41.920 aer_cb - Changed Namespace 00:26:41.920 Cleaning up... 00:26:41.920 [ 00:26:41.920 { 00:26:41.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:41.920 "subtype": "Discovery", 00:26:41.920 "listen_addresses": [], 00:26:41.920 "allow_any_host": true, 00:26:41.920 "hosts": [] 00:26:41.920 }, 00:26:41.920 { 00:26:41.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:41.920 "subtype": "NVMe", 00:26:41.920 "listen_addresses": [ 00:26:41.920 { 00:26:41.920 "transport": "TCP", 00:26:41.920 "trtype": "TCP", 00:26:41.920 "adrfam": "IPv4", 00:26:41.920 "traddr": "10.0.0.2", 00:26:41.920 "trsvcid": "4420" 00:26:41.920 } 00:26:41.920 ], 00:26:41.920 "allow_any_host": true, 00:26:41.920 "hosts": [], 00:26:41.920 "serial_number": "SPDK00000000000001", 00:26:41.920 "model_number": "SPDK bdev Controller", 00:26:41.920 "max_namespaces": 2, 00:26:41.920 "min_cntlid": 1, 00:26:41.920 "max_cntlid": 65519, 00:26:41.920 "namespaces": [ 00:26:41.920 { 00:26:41.920 "nsid": 1, 00:26:41.920 "bdev_name": "Malloc0", 00:26:41.920 "name": "Malloc0", 00:26:41.920 "nguid": "7ECEAB9CF46E4A50B0AD16E25D28A866", 00:26:41.920 "uuid": "7eceab9c-f46e-4a50-b0ad-16e25d28a866" 00:26:41.920 }, 00:26:41.920 { 00:26:41.920 "nsid": 2, 00:26:41.920 "bdev_name": "Malloc1", 00:26:41.920 "name": "Malloc1", 00:26:41.920 "nguid": "9C754EBEECBB4063A110FB08F86CEA80", 00:26:41.920 "uuid": "9c754ebe-ecbb-4063-a110-fb08f86cea80" 00:26:41.920 } 00:26:41.920 ] 00:26:41.920 } 00:26:41.920 ] 00:26:41.920 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.920 15:12:00 -- host/aer.sh@43 -- # wait 3401205 00:26:41.920 15:12:00 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:41.920 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.920 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.920 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.920 15:12:00 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:41.920 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.920 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.920 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.920 15:12:00 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:41.920 15:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.920 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:41.920 15:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.920 15:12:00 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:41.920 15:12:00 -- host/aer.sh@51 -- # nvmftestfini 00:26:41.920 15:12:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:41.920 15:12:00 -- nvmf/common.sh@116 -- # sync 00:26:41.920 15:12:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:41.920 15:12:00 -- nvmf/common.sh@119 -- # set +e 00:26:41.920 15:12:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:41.920 15:12:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:41.920 rmmod nvme_tcp 00:26:41.920 rmmod nvme_fabrics 00:26:41.920 rmmod nvme_keyring 00:26:41.920 15:12:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:41.920 15:12:00 -- nvmf/common.sh@123 -- # set -e 00:26:41.920 15:12:00 -- nvmf/common.sh@124 -- # return 0 00:26:41.920 15:12:00 -- nvmf/common.sh@477 -- # '[' -n 3400894 ']' 00:26:41.920 15:12:00 
-- nvmf/common.sh@478 -- # killprocess 3400894 00:26:41.920 15:12:00 -- common/autotest_common.sh@926 -- # '[' -z 3400894 ']' 00:26:41.920 15:12:00 -- common/autotest_common.sh@930 -- # kill -0 3400894 00:26:41.920 15:12:00 -- common/autotest_common.sh@931 -- # uname 00:26:41.920 15:12:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:41.920 15:12:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3400894 00:26:42.180 15:12:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:42.180 15:12:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:42.180 15:12:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3400894' 00:26:42.180 killing process with pid 3400894 00:26:42.181 15:12:00 -- common/autotest_common.sh@945 -- # kill 3400894 00:26:42.181 [2024-06-11 15:12:00.780856] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:42.181 15:12:00 -- common/autotest_common.sh@950 -- # wait 3400894 00:26:42.181 15:12:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:42.181 15:12:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:42.181 15:12:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:42.181 15:12:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.181 15:12:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:42.181 15:12:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.181 15:12:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.181 15:12:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.741 15:12:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:44.741 00:26:44.741 real 0m10.463s 00:26:44.741 user 0m8.029s 00:26:44.741 sys 0m5.374s 00:26:44.741 15:12:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.741 15:12:03 -- common/autotest_common.sh@10 -- # set +x 00:26:44.741 ************************************ 00:26:44.741 END TEST nvmf_aer 00:26:44.741 ************************************ 00:26:44.741 15:12:03 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:44.741 15:12:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:44.741 15:12:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:44.741 15:12:03 -- common/autotest_common.sh@10 -- # set +x 00:26:44.741 ************************************ 00:26:44.741 START TEST nvmf_async_init 00:26:44.741 ************************************ 00:26:44.741 15:12:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:44.741 * Looking for test storage... 
00:26:44.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.741 15:12:03 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.741 15:12:03 -- nvmf/common.sh@7 -- # uname -s 00:26:44.741 15:12:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.741 15:12:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.741 15:12:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.741 15:12:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.741 15:12:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.741 15:12:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.741 15:12:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.741 15:12:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.741 15:12:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.741 15:12:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.741 15:12:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:44.741 15:12:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:44.741 15:12:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.741 15:12:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.741 15:12:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.741 15:12:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.741 15:12:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.741 15:12:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.741 15:12:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.741 15:12:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.741 15:12:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.741 15:12:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.741 15:12:03 -- paths/export.sh@5 -- # export PATH 00:26:44.741 15:12:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.741 15:12:03 -- nvmf/common.sh@46 -- # : 0 00:26:44.741 15:12:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:44.741 15:12:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:44.741 15:12:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:44.741 15:12:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.741 15:12:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.741 15:12:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:44.741 15:12:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:44.741 15:12:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:44.741 15:12:03 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:44.741 15:12:03 -- host/async_init.sh@14 -- # null_block_size=512 00:26:44.741 15:12:03 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:44.741 15:12:03 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:44.741 15:12:03 -- host/async_init.sh@20 -- # uuidgen 00:26:44.741 15:12:03 -- host/async_init.sh@20 -- # tr -d - 00:26:44.741 15:12:03 -- host/async_init.sh@20 -- # nguid=549c2f6bfd7946cb88ffc26a48fa4647 00:26:44.741 15:12:03 -- host/async_init.sh@22 -- # nvmftestinit 00:26:44.741 15:12:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:44.741 15:12:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.741 15:12:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:44.741 15:12:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:44.741 15:12:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:44.741 15:12:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.741 15:12:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.741 15:12:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.741 15:12:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:44.741 15:12:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:44.741 15:12:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:44.741 15:12:03 -- common/autotest_common.sh@10 -- # set +x 00:26:51.303 15:12:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:51.303 15:12:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:51.303 15:12:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:51.303 15:12:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:51.303 15:12:09 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:51.303 15:12:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:51.303 15:12:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:51.303 15:12:09 -- nvmf/common.sh@294 -- # net_devs=() 00:26:51.303 15:12:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:51.303 15:12:09 -- nvmf/common.sh@295 -- # e810=() 00:26:51.303 15:12:09 -- nvmf/common.sh@295 -- # local -ga e810 00:26:51.303 15:12:09 -- nvmf/common.sh@296 -- # x722=() 00:26:51.303 15:12:09 -- nvmf/common.sh@296 -- # local -ga x722 00:26:51.303 15:12:09 -- nvmf/common.sh@297 -- # mlx=() 00:26:51.303 15:12:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:51.303 15:12:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.303 15:12:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:51.303 15:12:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:51.303 15:12:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:51.303 15:12:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:51.303 15:12:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:51.303 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:51.303 15:12:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:51.303 15:12:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:51.303 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:51.303 15:12:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:51.303 15:12:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:51.303 
15:12:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.303 15:12:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:51.303 15:12:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.303 15:12:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:51.303 Found net devices under 0000:af:00.0: cvl_0_0 00:26:51.303 15:12:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.303 15:12:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:51.303 15:12:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.303 15:12:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:51.303 15:12:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.303 15:12:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:51.303 Found net devices under 0000:af:00.1: cvl_0_1 00:26:51.303 15:12:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.303 15:12:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:51.303 15:12:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:51.303 15:12:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:51.303 15:12:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.303 15:12:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.303 15:12:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.303 15:12:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:51.303 15:12:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.303 15:12:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.303 15:12:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:51.303 15:12:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.303 15:12:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.303 15:12:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:51.303 15:12:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:51.303 15:12:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.303 15:12:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.303 15:12:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.303 15:12:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.303 15:12:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:51.303 15:12:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.303 15:12:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.303 15:12:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.303 15:12:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:51.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:26:51.303 00:26:51.303 --- 10.0.0.2 ping statistics --- 00:26:51.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.303 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:26:51.303 15:12:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:26:51.303 00:26:51.303 --- 10.0.0.1 ping statistics --- 00:26:51.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.303 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:51.303 15:12:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.303 15:12:09 -- nvmf/common.sh@410 -- # return 0 00:26:51.303 15:12:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:51.303 15:12:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.303 15:12:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:51.303 15:12:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.303 15:12:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:51.303 15:12:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:51.303 15:12:09 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:51.303 15:12:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:51.303 15:12:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:51.303 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.303 15:12:09 -- nvmf/common.sh@469 -- # nvmfpid=3405664 00:26:51.303 15:12:09 -- nvmf/common.sh@470 -- # waitforlisten 3405664 00:26:51.303 15:12:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:51.303 15:12:09 -- common/autotest_common.sh@819 -- # '[' -z 3405664 ']' 00:26:51.303 15:12:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.303 15:12:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:51.303 15:12:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.303 15:12:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:51.303 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.304 [2024-06-11 15:12:09.746120] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:51.304 [2024-06-11 15:12:09.746177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.304 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.304 [2024-06-11 15:12:09.840185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.304 [2024-06-11 15:12:09.928123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:51.304 [2024-06-11 15:12:09.928267] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.304 [2024-06-11 15:12:09.928279] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.304 [2024-06-11 15:12:09.928289] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
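The nvmfappstart sequence above boils down to launching nvmf_tgt inside that namespace and waiting for its RPC socket before any rpc_cmd calls are issued; roughly (a simplified sketch, not the exact helper code; waitforlisten is approximated here by polling for the default UNIX-domain RPC socket):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # crude stand-in for waitforlisten $nvmfpid
  rpc_cmd nvmf_create_transport -t tcp -o                  # subsequent RPCs go over /var/tmp/spdk.sock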
00:26:51.304 [2024-06-11 15:12:09.928311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.870 15:12:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:51.870 15:12:10 -- common/autotest_common.sh@852 -- # return 0 00:26:51.870 15:12:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:51.870 15:12:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:51.870 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:51.870 15:12:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.870 15:12:10 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:51.870 15:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.870 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.128 [2024-06-11 15:12:10.714174] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.128 15:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.128 15:12:10 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:52.128 15:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.128 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.128 null0 00:26:52.128 15:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.128 15:12:10 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:52.128 15:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.128 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.128 15:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.128 15:12:10 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:52.128 15:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.128 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.128 15:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.128 15:12:10 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 549c2f6bfd7946cb88ffc26a48fa4647 00:26:52.128 15:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.128 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.128 15:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.128 15:12:10 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:52.128 15:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.128 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.128 [2024-06-11 15:12:10.754406] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.128 15:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.128 15:12:10 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:52.128 15:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.128 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.387 nvme0n1 00:26:52.387 15:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.387 15:12:10 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:52.387 15:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.387 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.387 [ 00:26:52.387 { 00:26:52.387 "name": "nvme0n1", 00:26:52.387 "aliases": [ 00:26:52.387 
"549c2f6b-fd79-46cb-88ff-c26a48fa4647" 00:26:52.387 ], 00:26:52.387 "product_name": "NVMe disk", 00:26:52.387 "block_size": 512, 00:26:52.387 "num_blocks": 2097152, 00:26:52.387 "uuid": "549c2f6b-fd79-46cb-88ff-c26a48fa4647", 00:26:52.387 "assigned_rate_limits": { 00:26:52.387 "rw_ios_per_sec": 0, 00:26:52.387 "rw_mbytes_per_sec": 0, 00:26:52.387 "r_mbytes_per_sec": 0, 00:26:52.387 "w_mbytes_per_sec": 0 00:26:52.387 }, 00:26:52.387 "claimed": false, 00:26:52.387 "zoned": false, 00:26:52.387 "supported_io_types": { 00:26:52.387 "read": true, 00:26:52.387 "write": true, 00:26:52.387 "unmap": false, 00:26:52.387 "write_zeroes": true, 00:26:52.387 "flush": true, 00:26:52.387 "reset": true, 00:26:52.387 "compare": true, 00:26:52.387 "compare_and_write": true, 00:26:52.387 "abort": true, 00:26:52.387 "nvme_admin": true, 00:26:52.387 "nvme_io": true 00:26:52.387 }, 00:26:52.387 "driver_specific": { 00:26:52.387 "nvme": [ 00:26:52.387 { 00:26:52.387 "trid": { 00:26:52.387 "trtype": "TCP", 00:26:52.387 "adrfam": "IPv4", 00:26:52.387 "traddr": "10.0.0.2", 00:26:52.387 "trsvcid": "4420", 00:26:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:52.387 }, 00:26:52.387 "ctrlr_data": { 00:26:52.387 "cntlid": 1, 00:26:52.387 "vendor_id": "0x8086", 00:26:52.387 "model_number": "SPDK bdev Controller", 00:26:52.387 "serial_number": "00000000000000000000", 00:26:52.387 "firmware_revision": "24.01.1", 00:26:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:52.387 "oacs": { 00:26:52.387 "security": 0, 00:26:52.387 "format": 0, 00:26:52.387 "firmware": 0, 00:26:52.387 "ns_manage": 0 00:26:52.387 }, 00:26:52.387 "multi_ctrlr": true, 00:26:52.387 "ana_reporting": false 00:26:52.387 }, 00:26:52.387 "vs": { 00:26:52.387 "nvme_version": "1.3" 00:26:52.387 }, 00:26:52.387 "ns_data": { 00:26:52.387 "id": 1, 00:26:52.387 "can_share": true 00:26:52.387 } 00:26:52.387 } 00:26:52.387 ], 00:26:52.387 "mp_policy": "active_passive" 00:26:52.387 } 00:26:52.387 } 00:26:52.387 ] 00:26:52.387 15:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.387 15:12:10 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:52.387 15:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.387 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.387 [2024-06-11 15:12:10.998933] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.387 [2024-06-11 15:12:10.999004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b15820 (9): Bad file descriptor 00:26:52.387 [2024-06-11 15:12:11.131145] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:52.387 15:12:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.387 15:12:11 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:52.387 15:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.387 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.387 [ 00:26:52.387 { 00:26:52.387 "name": "nvme0n1", 00:26:52.387 "aliases": [ 00:26:52.387 "549c2f6b-fd79-46cb-88ff-c26a48fa4647" 00:26:52.387 ], 00:26:52.387 "product_name": "NVMe disk", 00:26:52.387 "block_size": 512, 00:26:52.387 "num_blocks": 2097152, 00:26:52.387 "uuid": "549c2f6b-fd79-46cb-88ff-c26a48fa4647", 00:26:52.387 "assigned_rate_limits": { 00:26:52.387 "rw_ios_per_sec": 0, 00:26:52.387 "rw_mbytes_per_sec": 0, 00:26:52.387 "r_mbytes_per_sec": 0, 00:26:52.387 "w_mbytes_per_sec": 0 00:26:52.387 }, 00:26:52.387 "claimed": false, 00:26:52.387 "zoned": false, 00:26:52.387 "supported_io_types": { 00:26:52.387 "read": true, 00:26:52.387 "write": true, 00:26:52.387 "unmap": false, 00:26:52.387 "write_zeroes": true, 00:26:52.387 "flush": true, 00:26:52.387 "reset": true, 00:26:52.387 "compare": true, 00:26:52.387 "compare_and_write": true, 00:26:52.387 "abort": true, 00:26:52.387 "nvme_admin": true, 00:26:52.387 "nvme_io": true 00:26:52.387 }, 00:26:52.387 "driver_specific": { 00:26:52.387 "nvme": [ 00:26:52.387 { 00:26:52.387 "trid": { 00:26:52.387 "trtype": "TCP", 00:26:52.387 "adrfam": "IPv4", 00:26:52.387 "traddr": "10.0.0.2", 00:26:52.387 "trsvcid": "4420", 00:26:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:52.387 }, 00:26:52.387 "ctrlr_data": { 00:26:52.387 "cntlid": 2, 00:26:52.387 "vendor_id": "0x8086", 00:26:52.387 "model_number": "SPDK bdev Controller", 00:26:52.387 "serial_number": "00000000000000000000", 00:26:52.387 "firmware_revision": "24.01.1", 00:26:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:52.387 "oacs": { 00:26:52.387 "security": 0, 00:26:52.387 "format": 0, 00:26:52.387 "firmware": 0, 00:26:52.387 "ns_manage": 0 00:26:52.387 }, 00:26:52.387 "multi_ctrlr": true, 00:26:52.387 "ana_reporting": false 00:26:52.387 }, 00:26:52.387 "vs": { 00:26:52.387 "nvme_version": "1.3" 00:26:52.387 }, 00:26:52.387 "ns_data": { 00:26:52.387 "id": 1, 00:26:52.387 "can_share": true 00:26:52.387 } 00:26:52.387 } 00:26:52.387 ], 00:26:52.387 "mp_policy": "active_passive" 00:26:52.387 } 00:26:52.387 } 00:26:52.387 ] 00:26:52.387 15:12:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.388 15:12:11 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.388 15:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.388 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.388 15:12:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.388 15:12:11 -- host/async_init.sh@53 -- # mktemp 00:26:52.388 15:12:11 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.TmQLlkre1N 00:26:52.388 15:12:11 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:52.388 15:12:11 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.TmQLlkre1N 00:26:52.388 15:12:11 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:52.388 15:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.388 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.388 15:12:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.388 15:12:11 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:52.388 15:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.388 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.388 [2024-06-11 15:12:11.175529] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:52.388 [2024-06-11 15:12:11.175659] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:52.388 15:12:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.388 15:12:11 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TmQLlkre1N 00:26:52.388 15:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.388 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.388 15:12:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.388 15:12:11 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TmQLlkre1N 00:26:52.388 15:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.388 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.388 [2024-06-11 15:12:11.191574] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:52.647 nvme0n1 00:26:52.647 15:12:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.647 15:12:11 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:52.647 15:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.647 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.647 [ 00:26:52.647 { 00:26:52.647 "name": "nvme0n1", 00:26:52.647 "aliases": [ 00:26:52.647 "549c2f6b-fd79-46cb-88ff-c26a48fa4647" 00:26:52.647 ], 00:26:52.647 "product_name": "NVMe disk", 00:26:52.647 "block_size": 512, 00:26:52.647 "num_blocks": 2097152, 00:26:52.647 "uuid": "549c2f6b-fd79-46cb-88ff-c26a48fa4647", 00:26:52.647 "assigned_rate_limits": { 00:26:52.647 "rw_ios_per_sec": 0, 00:26:52.647 "rw_mbytes_per_sec": 0, 00:26:52.647 "r_mbytes_per_sec": 0, 00:26:52.647 "w_mbytes_per_sec": 0 00:26:52.647 }, 00:26:52.647 "claimed": false, 00:26:52.647 "zoned": false, 00:26:52.647 "supported_io_types": { 00:26:52.647 "read": true, 00:26:52.647 "write": true, 00:26:52.647 "unmap": false, 00:26:52.647 "write_zeroes": true, 00:26:52.647 "flush": true, 00:26:52.647 "reset": true, 00:26:52.647 "compare": true, 00:26:52.647 "compare_and_write": true, 00:26:52.647 "abort": true, 00:26:52.647 "nvme_admin": true, 00:26:52.647 "nvme_io": true 00:26:52.647 }, 00:26:52.647 "driver_specific": { 00:26:52.647 "nvme": [ 00:26:52.647 { 00:26:52.647 "trid": { 00:26:52.647 "trtype": "TCP", 00:26:52.647 "adrfam": "IPv4", 00:26:52.647 "traddr": "10.0.0.2", 00:26:52.647 "trsvcid": "4421", 00:26:52.647 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:52.647 }, 00:26:52.647 "ctrlr_data": { 00:26:52.647 "cntlid": 3, 00:26:52.647 "vendor_id": "0x8086", 00:26:52.647 "model_number": "SPDK bdev Controller", 00:26:52.647 "serial_number": "00000000000000000000", 00:26:52.647 "firmware_revision": "24.01.1", 00:26:52.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:52.647 "oacs": { 00:26:52.647 "security": 0, 00:26:52.647 "format": 0, 00:26:52.647 "firmware": 0, 00:26:52.647 "ns_manage": 0 00:26:52.647 }, 00:26:52.647 "multi_ctrlr": true, 00:26:52.647 "ana_reporting": false 00:26:52.647 }, 00:26:52.647 "vs": 
{ 00:26:52.647 "nvme_version": "1.3" 00:26:52.647 }, 00:26:52.647 "ns_data": { 00:26:52.647 "id": 1, 00:26:52.647 "can_share": true 00:26:52.647 } 00:26:52.647 } 00:26:52.647 ], 00:26:52.647 "mp_policy": "active_passive" 00:26:52.647 } 00:26:52.647 } 00:26:52.647 ] 00:26:52.647 15:12:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.647 15:12:11 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.647 15:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.647 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.647 15:12:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.647 15:12:11 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.TmQLlkre1N 00:26:52.647 15:12:11 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:52.647 15:12:11 -- host/async_init.sh@78 -- # nvmftestfini 00:26:52.647 15:12:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:52.647 15:12:11 -- nvmf/common.sh@116 -- # sync 00:26:52.647 15:12:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:52.647 15:12:11 -- nvmf/common.sh@119 -- # set +e 00:26:52.647 15:12:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:52.647 15:12:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:52.647 rmmod nvme_tcp 00:26:52.647 rmmod nvme_fabrics 00:26:52.647 rmmod nvme_keyring 00:26:52.647 15:12:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:52.647 15:12:11 -- nvmf/common.sh@123 -- # set -e 00:26:52.647 15:12:11 -- nvmf/common.sh@124 -- # return 0 00:26:52.647 15:12:11 -- nvmf/common.sh@477 -- # '[' -n 3405664 ']' 00:26:52.647 15:12:11 -- nvmf/common.sh@478 -- # killprocess 3405664 00:26:52.647 15:12:11 -- common/autotest_common.sh@926 -- # '[' -z 3405664 ']' 00:26:52.647 15:12:11 -- common/autotest_common.sh@930 -- # kill -0 3405664 00:26:52.647 15:12:11 -- common/autotest_common.sh@931 -- # uname 00:26:52.647 15:12:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:52.647 15:12:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3405664 00:26:52.647 15:12:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:52.647 15:12:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:52.647 15:12:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3405664' 00:26:52.647 killing process with pid 3405664 00:26:52.647 15:12:11 -- common/autotest_common.sh@945 -- # kill 3405664 00:26:52.647 15:12:11 -- common/autotest_common.sh@950 -- # wait 3405664 00:26:52.905 15:12:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:52.905 15:12:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:52.905 15:12:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:52.905 15:12:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.905 15:12:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:52.905 15:12:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.905 15:12:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.905 15:12:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.437 15:12:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:55.437 00:26:55.437 real 0m10.558s 00:26:55.437 user 0m3.897s 00:26:55.437 sys 0m5.288s 00:26:55.437 15:12:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.437 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:26:55.437 ************************************ 00:26:55.437 END TEST nvmf_async_init 00:26:55.437 
************************************ 00:26:55.437 15:12:13 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:55.437 15:12:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:55.437 15:12:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.437 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:26:55.437 ************************************ 00:26:55.437 START TEST dma 00:26:55.437 ************************************ 00:26:55.437 15:12:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:55.437 * Looking for test storage... 00:26:55.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.437 15:12:13 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.437 15:12:13 -- nvmf/common.sh@7 -- # uname -s 00:26:55.437 15:12:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.437 15:12:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.437 15:12:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.437 15:12:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.437 15:12:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.437 15:12:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.437 15:12:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.437 15:12:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.437 15:12:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.437 15:12:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.437 15:12:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:55.437 15:12:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:55.437 15:12:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.437 15:12:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.437 15:12:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.437 15:12:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.437 15:12:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.437 15:12:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.437 15:12:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.437 15:12:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.437 15:12:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.437 15:12:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.437 15:12:13 -- paths/export.sh@5 -- # export PATH 00:26:55.437 15:12:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.437 15:12:13 -- nvmf/common.sh@46 -- # : 0 00:26:55.437 15:12:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:55.437 15:12:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:55.437 15:12:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:55.437 15:12:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.437 15:12:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.437 15:12:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:55.437 15:12:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:55.437 15:12:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:55.437 15:12:13 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:55.437 15:12:13 -- host/dma.sh@13 -- # exit 0 00:26:55.437 00:26:55.437 real 0m0.109s 00:26:55.437 user 0m0.045s 00:26:55.437 sys 0m0.071s 00:26:55.437 15:12:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.437 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:26:55.437 ************************************ 00:26:55.437 END TEST dma 00:26:55.437 ************************************ 00:26:55.437 15:12:13 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:55.437 15:12:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:55.437 15:12:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.437 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:26:55.437 ************************************ 00:26:55.437 START TEST nvmf_identify 00:26:55.437 ************************************ 00:26:55.437 15:12:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:55.437 * Looking for 
test storage... 00:26:55.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.437 15:12:13 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.437 15:12:13 -- nvmf/common.sh@7 -- # uname -s 00:26:55.437 15:12:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.437 15:12:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.437 15:12:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.437 15:12:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.437 15:12:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.437 15:12:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.437 15:12:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.437 15:12:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.437 15:12:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.437 15:12:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.437 15:12:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:55.437 15:12:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:55.437 15:12:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.437 15:12:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.437 15:12:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.437 15:12:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.437 15:12:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.438 15:12:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.438 15:12:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.438 15:12:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.438 15:12:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.438 15:12:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.438 15:12:13 -- paths/export.sh@5 -- # export PATH 00:26:55.438 15:12:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.438 15:12:13 -- nvmf/common.sh@46 -- # : 0 00:26:55.438 15:12:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:55.438 15:12:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:55.438 15:12:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:55.438 15:12:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.438 15:12:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.438 15:12:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:55.438 15:12:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:55.438 15:12:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:55.438 15:12:13 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:55.438 15:12:13 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:55.438 15:12:13 -- host/identify.sh@14 -- # nvmftestinit 00:26:55.438 15:12:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:55.438 15:12:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.438 15:12:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:55.438 15:12:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:55.438 15:12:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:55.438 15:12:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.438 15:12:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.438 15:12:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.438 15:12:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:55.438 15:12:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:55.438 15:12:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:55.438 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:01.999 15:12:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:01.999 15:12:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:01.999 15:12:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:01.999 15:12:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:01.999 15:12:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:01.999 15:12:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:01.999 15:12:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:01.999 15:12:20 -- nvmf/common.sh@294 -- # net_devs=() 00:27:01.999 15:12:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:01.999 15:12:20 -- nvmf/common.sh@295 
-- # e810=() 00:27:01.999 15:12:20 -- nvmf/common.sh@295 -- # local -ga e810 00:27:01.999 15:12:20 -- nvmf/common.sh@296 -- # x722=() 00:27:01.999 15:12:20 -- nvmf/common.sh@296 -- # local -ga x722 00:27:01.999 15:12:20 -- nvmf/common.sh@297 -- # mlx=() 00:27:01.999 15:12:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:01.999 15:12:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.999 15:12:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:01.999 15:12:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:01.999 15:12:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:01.999 15:12:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:01.999 15:12:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:01.999 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:01.999 15:12:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:01.999 15:12:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:01.999 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:01.999 15:12:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:01.999 15:12:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:01.999 15:12:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:01.999 15:12:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.000 15:12:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:02.000 15:12:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.000 15:12:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:02.000 Found 
net devices under 0000:af:00.0: cvl_0_0 00:27:02.000 15:12:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.000 15:12:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:02.000 15:12:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.000 15:12:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:02.000 15:12:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.000 15:12:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:02.000 Found net devices under 0000:af:00.1: cvl_0_1 00:27:02.000 15:12:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.000 15:12:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:02.000 15:12:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:02.000 15:12:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:02.000 15:12:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:02.000 15:12:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:02.000 15:12:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.000 15:12:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.000 15:12:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.000 15:12:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:02.000 15:12:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.000 15:12:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.000 15:12:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:02.000 15:12:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.000 15:12:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.000 15:12:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:02.000 15:12:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:02.000 15:12:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.000 15:12:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.000 15:12:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.000 15:12:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.000 15:12:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:02.000 15:12:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.000 15:12:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.000 15:12:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.000 15:12:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:02.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:27:02.000 00:27:02.000 --- 10.0.0.2 ping statistics --- 00:27:02.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.000 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:27:02.000 15:12:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:02.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:27:02.000 00:27:02.000 --- 10.0.0.1 ping statistics --- 00:27:02.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.000 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:27:02.000 15:12:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.000 15:12:20 -- nvmf/common.sh@410 -- # return 0 00:27:02.000 15:12:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:02.000 15:12:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.000 15:12:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:02.000 15:12:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:02.000 15:12:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.000 15:12:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:02.000 15:12:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:02.000 15:12:20 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:02.000 15:12:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:02.000 15:12:20 -- common/autotest_common.sh@10 -- # set +x 00:27:02.000 15:12:20 -- host/identify.sh@19 -- # nvmfpid=3410080 00:27:02.000 15:12:20 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:02.000 15:12:20 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:02.000 15:12:20 -- host/identify.sh@23 -- # waitforlisten 3410080 00:27:02.000 15:12:20 -- common/autotest_common.sh@819 -- # '[' -z 3410080 ']' 00:27:02.000 15:12:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.000 15:12:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:02.000 15:12:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.000 15:12:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:02.000 15:12:20 -- common/autotest_common.sh@10 -- # set +x 00:27:02.000 [2024-06-11 15:12:20.648394] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:02.000 [2024-06-11 15:12:20.648446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.000 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.000 [2024-06-11 15:12:20.744153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:02.000 [2024-06-11 15:12:20.834455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:02.000 [2024-06-11 15:12:20.834598] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.000 [2024-06-11 15:12:20.834609] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.000 [2024-06-11 15:12:20.834619] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
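Before the identify test proper starts, nvmf_tcp_init (above) builds the two-port loopback topology that the tcp/phy tests in this job use: the first detected E810 port is moved into a private network namespace and acts as the target, while the second port stays in the root namespace as the initiator. A condensed sketch of those steps follows, using the interface names and addresses this run detected; cvl_0_0/cvl_0_1 and 10.0.0.1/10.0.0.2 are per-run values, not fixed ones.

# Sketch of the topology nvmf_tcp_init sets up above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability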
00:27:02.000 [2024-06-11 15:12:20.834660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.000 [2024-06-11 15:12:20.834776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:02.000 [2024-06-11 15:12:20.834889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:02.000 [2024-06-11 15:12:20.834890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.932 15:12:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:02.932 15:12:21 -- common/autotest_common.sh@852 -- # return 0 00:27:02.932 15:12:21 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:02.932 15:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.932 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.932 [2024-06-11 15:12:21.592737] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.932 15:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.932 15:12:21 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:02.932 15:12:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:02.932 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.932 15:12:21 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:02.932 15:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.932 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.932 Malloc0 00:27:02.932 15:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.932 15:12:21 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:02.932 15:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.932 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.932 15:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.932 15:12:21 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:02.932 15:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.932 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.932 15:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.932 15:12:21 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.932 15:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.932 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.932 [2024-06-11 15:12:21.684532] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.932 15:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.932 15:12:21 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:02.932 15:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.932 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.932 15:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.932 15:12:21 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:02.932 15:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.932 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.932 [2024-06-11 15:12:21.700332] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:02.932 [ 
00:27:02.932 { 00:27:02.932 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:02.932 "subtype": "Discovery", 00:27:02.932 "listen_addresses": [ 00:27:02.932 { 00:27:02.932 "transport": "TCP", 00:27:02.932 "trtype": "TCP", 00:27:02.932 "adrfam": "IPv4", 00:27:02.932 "traddr": "10.0.0.2", 00:27:02.932 "trsvcid": "4420" 00:27:02.932 } 00:27:02.932 ], 00:27:02.932 "allow_any_host": true, 00:27:02.932 "hosts": [] 00:27:02.932 }, 00:27:02.932 { 00:27:02.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.932 "subtype": "NVMe", 00:27:02.932 "listen_addresses": [ 00:27:02.932 { 00:27:02.932 "transport": "TCP", 00:27:02.932 "trtype": "TCP", 00:27:02.932 "adrfam": "IPv4", 00:27:02.932 "traddr": "10.0.0.2", 00:27:02.932 "trsvcid": "4420" 00:27:02.932 } 00:27:02.932 ], 00:27:02.932 "allow_any_host": true, 00:27:02.932 "hosts": [], 00:27:02.932 "serial_number": "SPDK00000000000001", 00:27:02.932 "model_number": "SPDK bdev Controller", 00:27:02.932 "max_namespaces": 32, 00:27:02.932 "min_cntlid": 1, 00:27:02.932 "max_cntlid": 65519, 00:27:02.932 "namespaces": [ 00:27:02.932 { 00:27:02.932 "nsid": 1, 00:27:02.932 "bdev_name": "Malloc0", 00:27:02.932 "name": "Malloc0", 00:27:02.932 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:02.932 "eui64": "ABCDEF0123456789", 00:27:02.932 "uuid": "8ac98f2a-24c0-41f6-959d-62253da41b90" 00:27:02.932 } 00:27:02.932 ] 00:27:02.932 } 00:27:02.932 ] 00:27:02.932 15:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.932 15:12:21 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:02.932 [2024-06-11 15:12:21.733668] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
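The identify test then stands up a minimal target inside that namespace and points spdk_nvme_identify at its discovery service; the debug trace and the controller report that follow come from that single invocation. Below is a sketch of the sequence as it appears in the log, assuming rpc.py is scripts/rpc.py from this tree talking to the default /var/tmp/spdk.sock socket, and with paths written relative to the SPDK checkout; the real script also waits for the RPC socket before issuing any RPCs, which is omitted here.

# Target side (inside the namespace created above).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# (the test waits for /var/tmp/spdk.sock here before issuing RPCs)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems        # the JSON listing shown above

# Initiator side: identify the discovery controller with full debug logging (-L all),
# which is what produces the nvme_tcp/nvme_ctrlr trace and the report below.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all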
00:27:02.932 [2024-06-11 15:12:21.733703] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410360 ] 00:27:02.932 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.932 [2024-06-11 15:12:21.770587] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:02.932 [2024-06-11 15:12:21.770646] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:02.932 [2024-06-11 15:12:21.770653] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:02.932 [2024-06-11 15:12:21.770666] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:02.932 [2024-06-11 15:12:21.770676] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:02.932 [2024-06-11 15:12:21.771134] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:02.932 [2024-06-11 15:12:21.771169] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xffa9e0 0 00:27:03.193 [2024-06-11 15:12:21.789033] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:03.193 [2024-06-11 15:12:21.789054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:03.193 [2024-06-11 15:12:21.789060] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:03.193 [2024-06-11 15:12:21.789065] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:03.193 [2024-06-11 15:12:21.789113] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.789121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.789126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.193 [2024-06-11 15:12:21.789142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:03.193 [2024-06-11 15:12:21.789165] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.193 [2024-06-11 15:12:21.797039] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.193 [2024-06-11 15:12:21.797051] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.193 [2024-06-11 15:12:21.797056] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797061] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062730) on tqpair=0xffa9e0 00:27:03.193 [2024-06-11 15:12:21.797077] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:03.193 [2024-06-11 15:12:21.797086] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:03.193 [2024-06-11 15:12:21.797092] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:03.193 [2024-06-11 15:12:21.797111] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797116] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.193 [2024-06-11 15:12:21.797130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.193 [2024-06-11 15:12:21.797147] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.193 [2024-06-11 15:12:21.797382] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.193 [2024-06-11 15:12:21.797392] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.193 [2024-06-11 15:12:21.797397] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797402] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062730) on tqpair=0xffa9e0 00:27:03.193 [2024-06-11 15:12:21.797414] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:03.193 [2024-06-11 15:12:21.797424] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:03.193 [2024-06-11 15:12:21.797434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.193 [2024-06-11 15:12:21.797453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.193 [2024-06-11 15:12:21.797469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.193 [2024-06-11 15:12:21.797604] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.193 [2024-06-11 15:12:21.797613] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.193 [2024-06-11 15:12:21.797617] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062730) on tqpair=0xffa9e0 00:27:03.193 [2024-06-11 15:12:21.797634] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:03.193 [2024-06-11 15:12:21.797645] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:03.193 [2024-06-11 15:12:21.797654] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797659] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.193 [2024-06-11 15:12:21.797672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.193 [2024-06-11 15:12:21.797687] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.193 [2024-06-11 15:12:21.797825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.193 [2024-06-11 15:12:21.797834] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.193 [2024-06-11 15:12:21.797838] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062730) on tqpair=0xffa9e0 00:27:03.193 [2024-06-11 15:12:21.797850] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:03.193 [2024-06-11 15:12:21.797863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797868] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.797873] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.193 [2024-06-11 15:12:21.797881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.193 [2024-06-11 15:12:21.797895] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.193 [2024-06-11 15:12:21.798020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.193 [2024-06-11 15:12:21.798040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.193 [2024-06-11 15:12:21.798045] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.193 [2024-06-11 15:12:21.798050] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062730) on tqpair=0xffa9e0 00:27:03.193 [2024-06-11 15:12:21.798058] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:03.193 [2024-06-11 15:12:21.798064] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:03.193 [2024-06-11 15:12:21.798075] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:03.194 [2024-06-11 15:12:21.798182] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:03.194 [2024-06-11 15:12:21.798188] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:03.194 [2024-06-11 15:12:21.798198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.798203] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.798207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.194 [2024-06-11 15:12:21.798216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.194 [2024-06-11 15:12:21.798232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.194 [2024-06-11 15:12:21.798447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.194 [2024-06-11 15:12:21.798459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.194 [2024-06-11 15:12:21.798463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:27:03.194 [2024-06-11 15:12:21.798468] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062730) on tqpair=0xffa9e0 00:27:03.194 [2024-06-11 15:12:21.798476] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:03.194 [2024-06-11 15:12:21.798488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.798493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.798498] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.194 [2024-06-11 15:12:21.798506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.194 [2024-06-11 15:12:21.798520] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.194 [2024-06-11 15:12:21.798723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.194 [2024-06-11 15:12:21.798731] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.194 [2024-06-11 15:12:21.798735] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.798740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062730) on tqpair=0xffa9e0 00:27:03.194 [2024-06-11 15:12:21.798747] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:03.194 [2024-06-11 15:12:21.798753] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:03.194 [2024-06-11 15:12:21.798764] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:03.194 [2024-06-11 15:12:21.798774] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:03.194 [2024-06-11 15:12:21.798785] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.798790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.798795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.194 [2024-06-11 15:12:21.798803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.194 [2024-06-11 15:12:21.798817] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.194 [2024-06-11 15:12:21.798980] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.194 [2024-06-11 15:12:21.798990] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.194 [2024-06-11 15:12:21.798995] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.798999] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xffa9e0): datao=0, datal=4096, cccid=0 00:27:03.194 [2024-06-11 15:12:21.799005] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1062730) on tqpair(0xffa9e0): expected_datao=0, payload_size=4096 
00:27:03.194 [2024-06-11 15:12:21.799096] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.799103] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.840840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.194 [2024-06-11 15:12:21.840855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.194 [2024-06-11 15:12:21.840860] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.840865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062730) on tqpair=0xffa9e0 00:27:03.194 [2024-06-11 15:12:21.840877] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:03.194 [2024-06-11 15:12:21.840891] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:03.194 [2024-06-11 15:12:21.840897] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:03.194 [2024-06-11 15:12:21.840904] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:03.194 [2024-06-11 15:12:21.840909] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:03.194 [2024-06-11 15:12:21.840915] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:03.194 [2024-06-11 15:12:21.840927] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:03.194 [2024-06-11 15:12:21.840937] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.840942] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.840947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.194 [2024-06-11 15:12:21.840957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:03.194 [2024-06-11 15:12:21.840975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.194 [2024-06-11 15:12:21.845036] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.194 [2024-06-11 15:12:21.845046] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.194 [2024-06-11 15:12:21.845051] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845056] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062730) on tqpair=0xffa9e0 00:27:03.194 [2024-06-11 15:12:21.845067] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845071] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845076] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xffa9e0) 00:27:03.194 [2024-06-11 15:12:21.845084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.194 [2024-06-11 15:12:21.845092] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845101] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xffa9e0) 00:27:03.194 [2024-06-11 15:12:21.845109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.194 [2024-06-11 15:12:21.845116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845125] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xffa9e0) 00:27:03.194 [2024-06-11 15:12:21.845132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.194 [2024-06-11 15:12:21.845140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.194 [2024-06-11 15:12:21.845156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.194 [2024-06-11 15:12:21.845162] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:03.194 [2024-06-11 15:12:21.845180] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:03.194 [2024-06-11 15:12:21.845189] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xffa9e0) 00:27:03.194 [2024-06-11 15:12:21.845206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.194 [2024-06-11 15:12:21.845224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062730, cid 0, qid 0 00:27:03.194 [2024-06-11 15:12:21.845231] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062890, cid 1, qid 0 00:27:03.194 [2024-06-11 15:12:21.845237] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10629f0, cid 2, qid 0 00:27:03.194 [2024-06-11 15:12:21.845243] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.194 [2024-06-11 15:12:21.845249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062cb0, cid 4, qid 0 00:27:03.194 [2024-06-11 15:12:21.845505] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.194 [2024-06-11 15:12:21.845514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.194 [2024-06-11 15:12:21.845519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.194 [2024-06-11 15:12:21.845524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1062cb0) on tqpair=0xffa9e0 00:27:03.194 [2024-06-11 15:12:21.845531] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:03.195 [2024-06-11 15:12:21.845538] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:03.195 [2024-06-11 15:12:21.845552] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.845558] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.845562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xffa9e0) 00:27:03.195 [2024-06-11 15:12:21.845571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.195 [2024-06-11 15:12:21.845587] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062cb0, cid 4, qid 0 00:27:03.195 [2024-06-11 15:12:21.845731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.195 [2024-06-11 15:12:21.845739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.195 [2024-06-11 15:12:21.845744] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.845748] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xffa9e0): datao=0, datal=4096, cccid=4 00:27:03.195 [2024-06-11 15:12:21.845754] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1062cb0) on tqpair(0xffa9e0): expected_datao=0, payload_size=4096 00:27:03.195 [2024-06-11 15:12:21.845877] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.845882] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.845966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.195 [2024-06-11 15:12:21.845975] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.195 [2024-06-11 15:12:21.845979] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.845983] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062cb0) on tqpair=0xffa9e0 00:27:03.195 [2024-06-11 15:12:21.845999] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:03.195 [2024-06-11 15:12:21.846022] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.846038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.846043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xffa9e0) 00:27:03.195 [2024-06-11 15:12:21.846052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.195 [2024-06-11 15:12:21.846061] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.846066] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.846070] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xffa9e0) 00:27:03.195 [2024-06-11 15:12:21.846078] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.195 [2024-06-11 15:12:21.846102] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062cb0, cid 4, qid 0 00:27:03.195 [2024-06-11 15:12:21.846109] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062e10, cid 5, qid 0 00:27:03.195 [2024-06-11 15:12:21.846308] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.195 [2024-06-11 15:12:21.846317] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.195 [2024-06-11 15:12:21.846321] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.846326] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xffa9e0): datao=0, datal=1024, cccid=4 00:27:03.195 [2024-06-11 15:12:21.846332] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1062cb0) on tqpair(0xffa9e0): expected_datao=0, payload_size=1024 00:27:03.195 [2024-06-11 15:12:21.846341] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.846346] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.846353] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.195 [2024-06-11 15:12:21.846360] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.195 [2024-06-11 15:12:21.846365] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.846370] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062e10) on tqpair=0xffa9e0 00:27:03.195 [2024-06-11 15:12:21.892036] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.195 [2024-06-11 15:12:21.892054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.195 [2024-06-11 15:12:21.892059] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.892065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062cb0) on tqpair=0xffa9e0 00:27:03.195 [2024-06-11 15:12:21.892081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.892087] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.892092] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xffa9e0) 00:27:03.195 [2024-06-11 15:12:21.892101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.195 [2024-06-11 15:12:21.892123] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062cb0, cid 4, qid 0 00:27:03.195 [2024-06-11 15:12:21.892434] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.195 [2024-06-11 15:12:21.892442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.195 [2024-06-11 15:12:21.892446] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.892451] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xffa9e0): datao=0, datal=3072, cccid=4 00:27:03.195 [2024-06-11 15:12:21.892456] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1062cb0) on tqpair(0xffa9e0): expected_datao=0, payload_size=3072 00:27:03.195 [2024-06-11 15:12:21.892549] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.892555] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.933225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.195 [2024-06-11 15:12:21.933242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.195 [2024-06-11 15:12:21.933247] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.933252] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062cb0) on tqpair=0xffa9e0 00:27:03.195 [2024-06-11 15:12:21.933265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.933271] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.933275] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xffa9e0) 00:27:03.195 [2024-06-11 15:12:21.933285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.195 [2024-06-11 15:12:21.933305] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062cb0, cid 4, qid 0 00:27:03.195 [2024-06-11 15:12:21.933441] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.195 [2024-06-11 15:12:21.933450] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.195 [2024-06-11 15:12:21.933454] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.933459] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xffa9e0): datao=0, datal=8, cccid=4 00:27:03.195 [2024-06-11 15:12:21.933464] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1062cb0) on tqpair(0xffa9e0): expected_datao=0, payload_size=8 00:27:03.195 [2024-06-11 15:12:21.933473] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.933479] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.974224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.195 [2024-06-11 15:12:21.974241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.195 [2024-06-11 15:12:21.974246] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.195 [2024-06-11 15:12:21.974251] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062cb0) on tqpair=0xffa9e0 00:27:03.195 ===================================================== 00:27:03.195 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:03.195 ===================================================== 00:27:03.195 Controller Capabilities/Features 00:27:03.195 ================================ 00:27:03.195 Vendor ID: 0000 00:27:03.195 Subsystem Vendor ID: 0000 00:27:03.195 Serial Number: .................... 00:27:03.195 Model Number: ........................................ 
00:27:03.195 Firmware Version: 24.01.1 00:27:03.195 Recommended Arb Burst: 0 00:27:03.195 IEEE OUI Identifier: 00 00 00 00:27:03.195 Multi-path I/O 00:27:03.195 May have multiple subsystem ports: No 00:27:03.195 May have multiple controllers: No 00:27:03.195 Associated with SR-IOV VF: No 00:27:03.195 Max Data Transfer Size: 131072 00:27:03.195 Max Number of Namespaces: 0 00:27:03.195 Max Number of I/O Queues: 1024 00:27:03.195 NVMe Specification Version (VS): 1.3 00:27:03.195 NVMe Specification Version (Identify): 1.3 00:27:03.195 Maximum Queue Entries: 128 00:27:03.195 Contiguous Queues Required: Yes 00:27:03.195 Arbitration Mechanisms Supported 00:27:03.195 Weighted Round Robin: Not Supported 00:27:03.195 Vendor Specific: Not Supported 00:27:03.195 Reset Timeout: 15000 ms 00:27:03.195 Doorbell Stride: 4 bytes 00:27:03.195 NVM Subsystem Reset: Not Supported 00:27:03.195 Command Sets Supported 00:27:03.195 NVM Command Set: Supported 00:27:03.195 Boot Partition: Not Supported 00:27:03.195 Memory Page Size Minimum: 4096 bytes 00:27:03.195 Memory Page Size Maximum: 4096 bytes 00:27:03.195 Persistent Memory Region: Not Supported 00:27:03.195 Optional Asynchronous Events Supported 00:27:03.195 Namespace Attribute Notices: Not Supported 00:27:03.196 Firmware Activation Notices: Not Supported 00:27:03.196 ANA Change Notices: Not Supported 00:27:03.196 PLE Aggregate Log Change Notices: Not Supported 00:27:03.196 LBA Status Info Alert Notices: Not Supported 00:27:03.196 EGE Aggregate Log Change Notices: Not Supported 00:27:03.196 Normal NVM Subsystem Shutdown event: Not Supported 00:27:03.196 Zone Descriptor Change Notices: Not Supported 00:27:03.196 Discovery Log Change Notices: Supported 00:27:03.196 Controller Attributes 00:27:03.196 128-bit Host Identifier: Not Supported 00:27:03.196 Non-Operational Permissive Mode: Not Supported 00:27:03.196 NVM Sets: Not Supported 00:27:03.196 Read Recovery Levels: Not Supported 00:27:03.196 Endurance Groups: Not Supported 00:27:03.196 Predictable Latency Mode: Not Supported 00:27:03.196 Traffic Based Keep ALive: Not Supported 00:27:03.196 Namespace Granularity: Not Supported 00:27:03.196 SQ Associations: Not Supported 00:27:03.196 UUID List: Not Supported 00:27:03.196 Multi-Domain Subsystem: Not Supported 00:27:03.196 Fixed Capacity Management: Not Supported 00:27:03.196 Variable Capacity Management: Not Supported 00:27:03.196 Delete Endurance Group: Not Supported 00:27:03.196 Delete NVM Set: Not Supported 00:27:03.196 Extended LBA Formats Supported: Not Supported 00:27:03.196 Flexible Data Placement Supported: Not Supported 00:27:03.196 00:27:03.196 Controller Memory Buffer Support 00:27:03.196 ================================ 00:27:03.196 Supported: No 00:27:03.196 00:27:03.196 Persistent Memory Region Support 00:27:03.196 ================================ 00:27:03.196 Supported: No 00:27:03.196 00:27:03.196 Admin Command Set Attributes 00:27:03.196 ============================ 00:27:03.196 Security Send/Receive: Not Supported 00:27:03.196 Format NVM: Not Supported 00:27:03.196 Firmware Activate/Download: Not Supported 00:27:03.196 Namespace Management: Not Supported 00:27:03.196 Device Self-Test: Not Supported 00:27:03.196 Directives: Not Supported 00:27:03.196 NVMe-MI: Not Supported 00:27:03.196 Virtualization Management: Not Supported 00:27:03.196 Doorbell Buffer Config: Not Supported 00:27:03.196 Get LBA Status Capability: Not Supported 00:27:03.196 Command & Feature Lockdown Capability: Not Supported 00:27:03.196 Abort Command Limit: 1 00:27:03.196 
Async Event Request Limit: 4 00:27:03.196 Number of Firmware Slots: N/A 00:27:03.196 Firmware Slot 1 Read-Only: N/A 00:27:03.196 Firmware Activation Without Reset: N/A 00:27:03.196 Multiple Update Detection Support: N/A 00:27:03.196 Firmware Update Granularity: No Information Provided 00:27:03.196 Per-Namespace SMART Log: No 00:27:03.196 Asymmetric Namespace Access Log Page: Not Supported 00:27:03.196 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:03.196 Command Effects Log Page: Not Supported 00:27:03.196 Get Log Page Extended Data: Supported 00:27:03.196 Telemetry Log Pages: Not Supported 00:27:03.196 Persistent Event Log Pages: Not Supported 00:27:03.196 Supported Log Pages Log Page: May Support 00:27:03.196 Commands Supported & Effects Log Page: Not Supported 00:27:03.196 Feature Identifiers & Effects Log Page:May Support 00:27:03.196 NVMe-MI Commands & Effects Log Page: May Support 00:27:03.196 Data Area 4 for Telemetry Log: Not Supported 00:27:03.196 Error Log Page Entries Supported: 128 00:27:03.196 Keep Alive: Not Supported 00:27:03.196 00:27:03.196 NVM Command Set Attributes 00:27:03.196 ========================== 00:27:03.196 Submission Queue Entry Size 00:27:03.196 Max: 1 00:27:03.196 Min: 1 00:27:03.196 Completion Queue Entry Size 00:27:03.196 Max: 1 00:27:03.196 Min: 1 00:27:03.196 Number of Namespaces: 0 00:27:03.196 Compare Command: Not Supported 00:27:03.196 Write Uncorrectable Command: Not Supported 00:27:03.196 Dataset Management Command: Not Supported 00:27:03.196 Write Zeroes Command: Not Supported 00:27:03.196 Set Features Save Field: Not Supported 00:27:03.196 Reservations: Not Supported 00:27:03.196 Timestamp: Not Supported 00:27:03.196 Copy: Not Supported 00:27:03.196 Volatile Write Cache: Not Present 00:27:03.196 Atomic Write Unit (Normal): 1 00:27:03.196 Atomic Write Unit (PFail): 1 00:27:03.196 Atomic Compare & Write Unit: 1 00:27:03.196 Fused Compare & Write: Supported 00:27:03.196 Scatter-Gather List 00:27:03.196 SGL Command Set: Supported 00:27:03.196 SGL Keyed: Supported 00:27:03.196 SGL Bit Bucket Descriptor: Not Supported 00:27:03.196 SGL Metadata Pointer: Not Supported 00:27:03.196 Oversized SGL: Not Supported 00:27:03.196 SGL Metadata Address: Not Supported 00:27:03.196 SGL Offset: Supported 00:27:03.196 Transport SGL Data Block: Not Supported 00:27:03.196 Replay Protected Memory Block: Not Supported 00:27:03.196 00:27:03.196 Firmware Slot Information 00:27:03.196 ========================= 00:27:03.196 Active slot: 0 00:27:03.196 00:27:03.196 00:27:03.196 Error Log 00:27:03.196 ========= 00:27:03.196 00:27:03.196 Active Namespaces 00:27:03.196 ================= 00:27:03.196 Discovery Log Page 00:27:03.196 ================== 00:27:03.196 Generation Counter: 2 00:27:03.196 Number of Records: 2 00:27:03.196 Record Format: 0 00:27:03.196 00:27:03.196 Discovery Log Entry 0 00:27:03.196 ---------------------- 00:27:03.196 Transport Type: 3 (TCP) 00:27:03.196 Address Family: 1 (IPv4) 00:27:03.196 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:03.196 Entry Flags: 00:27:03.196 Duplicate Returned Information: 1 00:27:03.196 Explicit Persistent Connection Support for Discovery: 1 00:27:03.196 Transport Requirements: 00:27:03.196 Secure Channel: Not Required 00:27:03.196 Port ID: 0 (0x0000) 00:27:03.196 Controller ID: 65535 (0xffff) 00:27:03.196 Admin Max SQ Size: 128 00:27:03.196 Transport Service Identifier: 4420 00:27:03.196 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:03.196 Transport Address: 10.0.0.2 00:27:03.196 
Discovery Log Entry 1 00:27:03.196 ---------------------- 00:27:03.196 Transport Type: 3 (TCP) 00:27:03.196 Address Family: 1 (IPv4) 00:27:03.196 Subsystem Type: 2 (NVM Subsystem) 00:27:03.196 Entry Flags: 00:27:03.196 Duplicate Returned Information: 0 00:27:03.196 Explicit Persistent Connection Support for Discovery: 0 00:27:03.196 Transport Requirements: 00:27:03.196 Secure Channel: Not Required 00:27:03.196 Port ID: 0 (0x0000) 00:27:03.196 Controller ID: 65535 (0xffff) 00:27:03.196 Admin Max SQ Size: 128 00:27:03.196 Transport Service Identifier: 4420 00:27:03.196 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:03.196 Transport Address: 10.0.0.2 [2024-06-11 15:12:21.974358] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:03.196 [2024-06-11 15:12:21.974376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.196 [2024-06-11 15:12:21.974385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.196 [2024-06-11 15:12:21.974393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.196 [2024-06-11 15:12:21.974400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.196 [2024-06-11 15:12:21.974414] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.196 [2024-06-11 15:12:21.974419] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.196 [2024-06-11 15:12:21.974424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.196 [2024-06-11 15:12:21.974434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.196 [2024-06-11 15:12:21.974452] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.196 [2024-06-11 15:12:21.974590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.196 [2024-06-11 15:12:21.974599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.196 [2024-06-11 15:12:21.974604] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.196 [2024-06-11 15:12:21.974609] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062b50) on tqpair=0xffa9e0 00:27:03.196 [2024-06-11 15:12:21.974619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.196 [2024-06-11 15:12:21.974626] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.196 [2024-06-11 15:12:21.974631] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.197 [2024-06-11 15:12:21.974640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.197 [2024-06-11 15:12:21.974660] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.197 [2024-06-11 15:12:21.974817] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.197 [2024-06-11 15:12:21.974826] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.197 [2024-06-11 15:12:21.974830] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.974835] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062b50) on tqpair=0xffa9e0 00:27:03.197 [2024-06-11 15:12:21.974842] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:03.197 [2024-06-11 15:12:21.974849] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:03.197 [2024-06-11 15:12:21.974862] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.974867] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.974871] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.197 [2024-06-11 15:12:21.974880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.197 [2024-06-11 15:12:21.974894] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.197 [2024-06-11 15:12:21.975021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.197 [2024-06-11 15:12:21.975036] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.197 [2024-06-11 15:12:21.975040] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975046] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062b50) on tqpair=0xffa9e0 00:27:03.197 [2024-06-11 15:12:21.975060] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975065] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975070] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.197 [2024-06-11 15:12:21.975079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.197 [2024-06-11 15:12:21.975094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.197 [2024-06-11 15:12:21.975296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.197 [2024-06-11 15:12:21.975304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.197 [2024-06-11 15:12:21.975308] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975313] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062b50) on tqpair=0xffa9e0 00:27:03.197 [2024-06-11 15:12:21.975327] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975337] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.197 [2024-06-11 15:12:21.975345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.197 [2024-06-11 15:12:21.975359] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.197 [2024-06-11 15:12:21.975491] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.197 [2024-06-11 
15:12:21.975500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.197 [2024-06-11 15:12:21.975505] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975513] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062b50) on tqpair=0xffa9e0 00:27:03.197 [2024-06-11 15:12:21.975527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975532] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975537] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.197 [2024-06-11 15:12:21.975545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.197 [2024-06-11 15:12:21.975559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.197 [2024-06-11 15:12:21.975682] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.197 [2024-06-11 15:12:21.975691] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.197 [2024-06-11 15:12:21.975695] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975700] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062b50) on tqpair=0xffa9e0 00:27:03.197 [2024-06-11 15:12:21.975713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975723] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.197 [2024-06-11 15:12:21.975732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.197 [2024-06-11 15:12:21.975745] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.197 [2024-06-11 15:12:21.975874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.197 [2024-06-11 15:12:21.975882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.197 [2024-06-11 15:12:21.975887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062b50) on tqpair=0xffa9e0 00:27:03.197 [2024-06-11 15:12:21.975905] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975911] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.975915] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.197 [2024-06-11 15:12:21.975924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.197 [2024-06-11 15:12:21.975938] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.197 [2024-06-11 15:12:21.980034] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.197 [2024-06-11 15:12:21.980048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.197 [2024-06-11 15:12:21.980053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:27:03.197 [2024-06-11 15:12:21.980058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062b50) on tqpair=0xffa9e0 00:27:03.197 [2024-06-11 15:12:21.980075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.980080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.980085] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xffa9e0) 00:27:03.197 [2024-06-11 15:12:21.980094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.197 [2024-06-11 15:12:21.980110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1062b50, cid 3, qid 0 00:27:03.197 [2024-06-11 15:12:21.980347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.197 [2024-06-11 15:12:21.980356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.197 [2024-06-11 15:12:21.980360] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.197 [2024-06-11 15:12:21.980365] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1062b50) on tqpair=0xffa9e0 00:27:03.197 [2024-06-11 15:12:21.980380] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:27:03.197 00:27:03.197 15:12:21 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:03.197 [2024-06-11 15:12:22.016224] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
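(Editor's note, not part of the captured log: the identify step launched just above can be reproduced by hand against the same target. This is only the command already recorded in this run, reformatted across lines for readability; the binary path, transport ID string, and subsystem NQN are copied verbatim from the log and would need adjusting for any other environment.)

    # Re-run the SPDK identify tool against the TCP target exercised in this log.
    # All values are taken from the invocation logged above; change traddr, trsvcid
    # and subnqn to point at a different NVMe-oF target.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all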
00:27:03.197 [2024-06-11 15:12:22.016257] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410369 ] 00:27:03.197 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.464 [2024-06-11 15:12:22.052314] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:03.464 [2024-06-11 15:12:22.052374] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:03.464 [2024-06-11 15:12:22.052381] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:03.464 [2024-06-11 15:12:22.052394] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:03.464 [2024-06-11 15:12:22.052403] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:03.464 [2024-06-11 15:12:22.052786] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:03.464 [2024-06-11 15:12:22.052814] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x139b9e0 0 00:27:03.465 [2024-06-11 15:12:22.067037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:03.465 [2024-06-11 15:12:22.067050] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:03.465 [2024-06-11 15:12:22.067055] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:03.465 [2024-06-11 15:12:22.067060] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:03.465 [2024-06-11 15:12:22.067096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.067103] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.067108] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.465 [2024-06-11 15:12:22.067121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:03.465 [2024-06-11 15:12:22.067139] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.465 [2024-06-11 15:12:22.074036] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.465 [2024-06-11 15:12:22.074047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.465 [2024-06-11 15:12:22.074052] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.074057] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403730) on tqpair=0x139b9e0 00:27:03.465 [2024-06-11 15:12:22.074072] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:03.465 [2024-06-11 15:12:22.074080] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:03.465 [2024-06-11 15:12:22.074087] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:03.465 [2024-06-11 15:12:22.074104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.074109] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.465 [2024-06-11 
15:12:22.074114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.465 [2024-06-11 15:12:22.074124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.465 [2024-06-11 15:12:22.074144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.465 [2024-06-11 15:12:22.074353] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.465 [2024-06-11 15:12:22.074365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.465 [2024-06-11 15:12:22.074369] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.074374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403730) on tqpair=0x139b9e0 00:27:03.465 [2024-06-11 15:12:22.074385] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:03.465 [2024-06-11 15:12:22.074397] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:03.465 [2024-06-11 15:12:22.074407] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.074412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.074417] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.465 [2024-06-11 15:12:22.074427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.465 [2024-06-11 15:12:22.074443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.465 [2024-06-11 15:12:22.074624] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.465 [2024-06-11 15:12:22.074632] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.465 [2024-06-11 15:12:22.074637] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.074642] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403730) on tqpair=0x139b9e0 00:27:03.465 [2024-06-11 15:12:22.074650] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:03.465 [2024-06-11 15:12:22.074661] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:03.465 [2024-06-11 15:12:22.074670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.074675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.465 [2024-06-11 15:12:22.074680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.465 [2024-06-11 15:12:22.074688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.466 [2024-06-11 15:12:22.074702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.466 [2024-06-11 15:12:22.074817] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.466 [2024-06-11 15:12:22.074826] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:27:03.466 [2024-06-11 15:12:22.074831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.074836] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403730) on tqpair=0x139b9e0 00:27:03.466 [2024-06-11 15:12:22.074844] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:03.466 [2024-06-11 15:12:22.074857] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.074863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.074868] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.466 [2024-06-11 15:12:22.074876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.466 [2024-06-11 15:12:22.074891] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.466 [2024-06-11 15:12:22.075008] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.466 [2024-06-11 15:12:22.075020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.466 [2024-06-11 15:12:22.075034] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075040] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403730) on tqpair=0x139b9e0 00:27:03.466 [2024-06-11 15:12:22.075047] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:03.466 [2024-06-11 15:12:22.075053] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:03.466 [2024-06-11 15:12:22.075065] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:03.466 [2024-06-11 15:12:22.075173] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:03.466 [2024-06-11 15:12:22.075178] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:03.466 [2024-06-11 15:12:22.075188] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075193] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.466 [2024-06-11 15:12:22.075207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.466 [2024-06-11 15:12:22.075223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.466 [2024-06-11 15:12:22.075332] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.466 [2024-06-11 15:12:22.075341] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.466 [2024-06-11 15:12:22.075345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075350] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403730) on 
tqpair=0x139b9e0 00:27:03.466 [2024-06-11 15:12:22.075357] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:03.466 [2024-06-11 15:12:22.075369] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075375] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075379] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.466 [2024-06-11 15:12:22.075388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.466 [2024-06-11 15:12:22.075402] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.466 [2024-06-11 15:12:22.075506] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.466 [2024-06-11 15:12:22.075515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.466 [2024-06-11 15:12:22.075519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403730) on tqpair=0x139b9e0 00:27:03.466 [2024-06-11 15:12:22.075531] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:03.466 [2024-06-11 15:12:22.075536] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:03.466 [2024-06-11 15:12:22.075547] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:03.466 [2024-06-11 15:12:22.075557] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:03.466 [2024-06-11 15:12:22.075569] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075576] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075581] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.466 [2024-06-11 15:12:22.075589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.466 [2024-06-11 15:12:22.075604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.466 [2024-06-11 15:12:22.075748] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.466 [2024-06-11 15:12:22.075757] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.466 [2024-06-11 15:12:22.075762] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075766] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139b9e0): datao=0, datal=4096, cccid=0 00:27:03.466 [2024-06-11 15:12:22.075772] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403730) on tqpair(0x139b9e0): expected_datao=0, payload_size=4096 00:27:03.466 [2024-06-11 15:12:22.075869] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.075875] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.466 [2024-06-11 15:12:22.116261] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.466 [2024-06-11 15:12:22.116279] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.466 [2024-06-11 15:12:22.116283] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403730) on tqpair=0x139b9e0 00:27:03.467 [2024-06-11 15:12:22.116300] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:03.467 [2024-06-11 15:12:22.116309] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:03.467 [2024-06-11 15:12:22.116315] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:03.467 [2024-06-11 15:12:22.116321] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:03.467 [2024-06-11 15:12:22.116326] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:03.467 [2024-06-11 15:12:22.116332] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:03.467 [2024-06-11 15:12:22.116345] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:03.467 [2024-06-11 15:12:22.116354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116359] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116363] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.467 [2024-06-11 15:12:22.116374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:03.467 [2024-06-11 15:12:22.116391] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.467 [2024-06-11 15:12:22.116511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.467 [2024-06-11 15:12:22.116521] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.467 [2024-06-11 15:12:22.116525] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116530] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403730) on tqpair=0x139b9e0 00:27:03.467 [2024-06-11 15:12:22.116539] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116544] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116548] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139b9e0) 00:27:03.467 [2024-06-11 15:12:22.116560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.467 [2024-06-11 15:12:22.116568] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116572] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116577] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x139b9e0) 00:27:03.467 [2024-06-11 15:12:22.116584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.467 [2024-06-11 15:12:22.116592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116601] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x139b9e0) 00:27:03.467 [2024-06-11 15:12:22.116608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.467 [2024-06-11 15:12:22.116615] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116620] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116624] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.467 [2024-06-11 15:12:22.116632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.467 [2024-06-11 15:12:22.116637] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:03.467 [2024-06-11 15:12:22.116652] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:03.467 [2024-06-11 15:12:22.116661] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116670] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139b9e0) 00:27:03.467 [2024-06-11 15:12:22.116678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.467 [2024-06-11 15:12:22.116695] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403730, cid 0, qid 0 00:27:03.467 [2024-06-11 15:12:22.116702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403890, cid 1, qid 0 00:27:03.467 [2024-06-11 15:12:22.116708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14039f0, cid 2, qid 0 00:27:03.467 [2024-06-11 15:12:22.116714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.467 [2024-06-11 15:12:22.116719] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403cb0, cid 4, qid 0 00:27:03.467 [2024-06-11 15:12:22.116859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.467 [2024-06-11 15:12:22.116868] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.467 [2024-06-11 15:12:22.116873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.467 [2024-06-11 15:12:22.116877] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403cb0) on tqpair=0x139b9e0 00:27:03.467 [2024-06-11 15:12:22.116884] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:03.467 
[2024-06-11 15:12:22.116890] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:03.468 [2024-06-11 15:12:22.116902] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:03.468 [2024-06-11 15:12:22.116910] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:03.468 [2024-06-11 15:12:22.116921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.116926] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.116930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139b9e0) 00:27:03.468 [2024-06-11 15:12:22.116939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:03.468 [2024-06-11 15:12:22.116954] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403cb0, cid 4, qid 0 00:27:03.468 [2024-06-11 15:12:22.117087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.468 [2024-06-11 15:12:22.117097] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.468 [2024-06-11 15:12:22.117101] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117106] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403cb0) on tqpair=0x139b9e0 00:27:03.468 [2024-06-11 15:12:22.117170] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:03.468 [2024-06-11 15:12:22.117182] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:03.468 [2024-06-11 15:12:22.117192] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117197] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117201] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139b9e0) 00:27:03.468 [2024-06-11 15:12:22.117210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.468 [2024-06-11 15:12:22.117225] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403cb0, cid 4, qid 0 00:27:03.468 [2024-06-11 15:12:22.117350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.468 [2024-06-11 15:12:22.117360] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.468 [2024-06-11 15:12:22.117365] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117369] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139b9e0): datao=0, datal=4096, cccid=4 00:27:03.468 [2024-06-11 15:12:22.117375] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403cb0) on tqpair(0x139b9e0): expected_datao=0, payload_size=4096 00:27:03.468 [2024-06-11 15:12:22.117385] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117389] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117523] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.468 [2024-06-11 15:12:22.117531] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.468 [2024-06-11 15:12:22.117536] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117541] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403cb0) on tqpair=0x139b9e0 00:27:03.468 [2024-06-11 15:12:22.117558] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:03.468 [2024-06-11 15:12:22.117570] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:03.468 [2024-06-11 15:12:22.117582] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:03.468 [2024-06-11 15:12:22.117591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117595] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139b9e0) 00:27:03.468 [2024-06-11 15:12:22.117609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.468 [2024-06-11 15:12:22.117630] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403cb0, cid 4, qid 0 00:27:03.468 [2024-06-11 15:12:22.117762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.468 [2024-06-11 15:12:22.117772] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.468 [2024-06-11 15:12:22.117776] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117781] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139b9e0): datao=0, datal=4096, cccid=4 00:27:03.468 [2024-06-11 15:12:22.117787] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403cb0) on tqpair(0x139b9e0): expected_datao=0, payload_size=4096 00:27:03.468 [2024-06-11 15:12:22.117796] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117801] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117906] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.468 [2024-06-11 15:12:22.117914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.468 [2024-06-11 15:12:22.117919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117923] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403cb0) on tqpair=0x139b9e0 00:27:03.468 [2024-06-11 15:12:22.117940] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:03.468 [2024-06-11 15:12:22.117953] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:03.468 [2024-06-11 15:12:22.117963] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.468 [2024-06-11 
15:12:22.117968] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.117972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139b9e0) 00:27:03.468 [2024-06-11 15:12:22.117981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.468 [2024-06-11 15:12:22.117996] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403cb0, cid 4, qid 0 00:27:03.468 [2024-06-11 15:12:22.122039] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.468 [2024-06-11 15:12:22.122051] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.468 [2024-06-11 15:12:22.122055] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.468 [2024-06-11 15:12:22.122060] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139b9e0): datao=0, datal=4096, cccid=4 00:27:03.468 [2024-06-11 15:12:22.122065] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403cb0) on tqpair(0x139b9e0): expected_datao=0, payload_size=4096 00:27:03.469 [2024-06-11 15:12:22.122075] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122080] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.469 [2024-06-11 15:12:22.122094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.469 [2024-06-11 15:12:22.122098] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122103] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403cb0) on tqpair=0x139b9e0 00:27:03.469 [2024-06-11 15:12:22.122114] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:03.469 [2024-06-11 15:12:22.122125] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:03.469 [2024-06-11 15:12:22.122136] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:03.469 [2024-06-11 15:12:22.122144] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:03.469 [2024-06-11 15:12:22.122153] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:03.469 [2024-06-11 15:12:22.122160] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:03.469 [2024-06-11 15:12:22.122166] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:03.469 [2024-06-11 15:12:22.122172] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:03.469 [2024-06-11 15:12:22.122188] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122193] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122197] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139b9e0) 00:27:03.469 [2024-06-11 15:12:22.122206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.469 [2024-06-11 15:12:22.122214] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122219] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122223] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139b9e0) 00:27:03.469 [2024-06-11 15:12:22.122231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.469 [2024-06-11 15:12:22.122249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403cb0, cid 4, qid 0 00:27:03.469 [2024-06-11 15:12:22.122256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403e10, cid 5, qid 0 00:27:03.469 [2024-06-11 15:12:22.122479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.469 [2024-06-11 15:12:22.122489] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.469 [2024-06-11 15:12:22.122493] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403cb0) on tqpair=0x139b9e0 00:27:03.469 [2024-06-11 15:12:22.122508] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.469 [2024-06-11 15:12:22.122515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.469 [2024-06-11 15:12:22.122520] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403e10) on tqpair=0x139b9e0 00:27:03.469 [2024-06-11 15:12:22.122538] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122548] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139b9e0) 00:27:03.469 [2024-06-11 15:12:22.122556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.469 [2024-06-11 15:12:22.122570] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403e10, cid 5, qid 0 00:27:03.469 [2024-06-11 15:12:22.122695] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.469 [2024-06-11 15:12:22.122704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.469 [2024-06-11 15:12:22.122708] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122713] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403e10) on tqpair=0x139b9e0 00:27:03.469 [2024-06-11 15:12:22.122727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.469 [2024-06-11 15:12:22.122736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139b9e0) 00:27:03.469 [2024-06-11 15:12:22.122744] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.469 [2024-06-11 15:12:22.122761] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403e10, cid 5, qid 0 00:27:03.469 [2024-06-11 15:12:22.122873] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.469 [2024-06-11 15:12:22.122881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.469 [2024-06-11 15:12:22.122886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.122891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403e10) on tqpair=0x139b9e0 00:27:03.470 [2024-06-11 15:12:22.122903] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.122908] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.122912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139b9e0) 00:27:03.470 [2024-06-11 15:12:22.122920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.470 [2024-06-11 15:12:22.122934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403e10, cid 5, qid 0 00:27:03.470 [2024-06-11 15:12:22.123068] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.470 [2024-06-11 15:12:22.123079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.470 [2024-06-11 15:12:22.123083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403e10) on tqpair=0x139b9e0 00:27:03.470 [2024-06-11 15:12:22.123104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123110] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139b9e0) 00:27:03.470 [2024-06-11 15:12:22.123123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.470 [2024-06-11 15:12:22.123132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123136] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139b9e0) 00:27:03.470 [2024-06-11 15:12:22.123149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.470 [2024-06-11 15:12:22.123157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123162] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123167] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x139b9e0) 00:27:03.470 [2024-06-11 15:12:22.123175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:03.470 [2024-06-11 15:12:22.123183] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123188] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123192] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x139b9e0) 00:27:03.470 [2024-06-11 15:12:22.123200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.470 [2024-06-11 15:12:22.123217] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403e10, cid 5, qid 0 00:27:03.470 [2024-06-11 15:12:22.123224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403cb0, cid 4, qid 0 00:27:03.470 [2024-06-11 15:12:22.123230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403f70, cid 6, qid 0 00:27:03.470 [2024-06-11 15:12:22.123239] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14040d0, cid 7, qid 0 00:27:03.470 [2024-06-11 15:12:22.123404] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.470 [2024-06-11 15:12:22.123414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.470 [2024-06-11 15:12:22.123419] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123423] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139b9e0): datao=0, datal=8192, cccid=5 00:27:03.470 [2024-06-11 15:12:22.123428] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403e10) on tqpair(0x139b9e0): expected_datao=0, payload_size=8192 00:27:03.470 [2024-06-11 15:12:22.123654] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123659] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123666] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.470 [2024-06-11 15:12:22.123673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.470 [2024-06-11 15:12:22.123678] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123682] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139b9e0): datao=0, datal=512, cccid=4 00:27:03.470 [2024-06-11 15:12:22.123688] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403cb0) on tqpair(0x139b9e0): expected_datao=0, payload_size=512 00:27:03.470 [2024-06-11 15:12:22.123696] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123701] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.470 [2024-06-11 15:12:22.123715] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.470 [2024-06-11 15:12:22.123719] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123724] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139b9e0): datao=0, datal=512, cccid=6 00:27:03.470 [2024-06-11 15:12:22.123729] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403f70) on tqpair(0x139b9e0): expected_datao=0, payload_size=512 00:27:03.470 [2024-06-11 15:12:22.123738] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.470 [2024-06-11 15:12:22.123742] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.471 [2024-06-11 15:12:22.123749] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.471 [2024-06-11 15:12:22.123756] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.471 [2024-06-11 15:12:22.123761] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.471 [2024-06-11 15:12:22.123765] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139b9e0): datao=0, datal=4096, cccid=7 00:27:03.471 [2024-06-11 15:12:22.123770] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14040d0) on tqpair(0x139b9e0): expected_datao=0, payload_size=4096 00:27:03.471 [2024-06-11 15:12:22.123779] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.471 [2024-06-11 15:12:22.123784] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.471 [2024-06-11 15:12:22.123808] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.471 [2024-06-11 15:12:22.123816] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.471 [2024-06-11 15:12:22.123821] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.471 [2024-06-11 15:12:22.123826] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403e10) on tqpair=0x139b9e0 00:27:03.471 [2024-06-11 15:12:22.123843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.471 [2024-06-11 15:12:22.123851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.471 [2024-06-11 15:12:22.123856] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.471 [2024-06-11 15:12:22.123860] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403cb0) on tqpair=0x139b9e0 00:27:03.471 [2024-06-11 15:12:22.123871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.471 [2024-06-11 15:12:22.123882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.471 [2024-06-11 15:12:22.123887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.471 [2024-06-11 15:12:22.123891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403f70) on tqpair=0x139b9e0 00:27:03.471 [2024-06-11 15:12:22.123901] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.471 [2024-06-11 15:12:22.123908] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.471 [2024-06-11 15:12:22.123912] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.471 [2024-06-11 15:12:22.123917] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14040d0) on tqpair=0x139b9e0 00:27:03.471 ===================================================== 00:27:03.471 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:03.471 ===================================================== 00:27:03.471 Controller Capabilities/Features 00:27:03.471 ================================ 00:27:03.471 Vendor ID: 8086 00:27:03.471 Subsystem Vendor ID: 8086 00:27:03.471 Serial Number: SPDK00000000000001 00:27:03.471 Model Number: SPDK bdev Controller 00:27:03.471 Firmware Version: 24.01.1 00:27:03.471 Recommended Arb Burst: 6 00:27:03.471 IEEE OUI Identifier: e4 d2 5c 00:27:03.471 Multi-path I/O 00:27:03.471 May have multiple subsystem 
ports: Yes 00:27:03.471 May have multiple controllers: Yes 00:27:03.471 Associated with SR-IOV VF: No 00:27:03.471 Max Data Transfer Size: 131072 00:27:03.471 Max Number of Namespaces: 32 00:27:03.471 Max Number of I/O Queues: 127 00:27:03.471 NVMe Specification Version (VS): 1.3 00:27:03.471 NVMe Specification Version (Identify): 1.3 00:27:03.471 Maximum Queue Entries: 128 00:27:03.471 Contiguous Queues Required: Yes 00:27:03.471 Arbitration Mechanisms Supported 00:27:03.471 Weighted Round Robin: Not Supported 00:27:03.471 Vendor Specific: Not Supported 00:27:03.471 Reset Timeout: 15000 ms 00:27:03.471 Doorbell Stride: 4 bytes 00:27:03.471 NVM Subsystem Reset: Not Supported 00:27:03.471 Command Sets Supported 00:27:03.471 NVM Command Set: Supported 00:27:03.471 Boot Partition: Not Supported 00:27:03.471 Memory Page Size Minimum: 4096 bytes 00:27:03.471 Memory Page Size Maximum: 4096 bytes 00:27:03.471 Persistent Memory Region: Not Supported 00:27:03.471 Optional Asynchronous Events Supported 00:27:03.471 Namespace Attribute Notices: Supported 00:27:03.471 Firmware Activation Notices: Not Supported 00:27:03.471 ANA Change Notices: Not Supported 00:27:03.471 PLE Aggregate Log Change Notices: Not Supported 00:27:03.471 LBA Status Info Alert Notices: Not Supported 00:27:03.471 EGE Aggregate Log Change Notices: Not Supported 00:27:03.471 Normal NVM Subsystem Shutdown event: Not Supported 00:27:03.471 Zone Descriptor Change Notices: Not Supported 00:27:03.471 Discovery Log Change Notices: Not Supported 00:27:03.471 Controller Attributes 00:27:03.471 128-bit Host Identifier: Supported 00:27:03.471 Non-Operational Permissive Mode: Not Supported 00:27:03.471 NVM Sets: Not Supported 00:27:03.471 Read Recovery Levels: Not Supported 00:27:03.471 Endurance Groups: Not Supported 00:27:03.471 Predictable Latency Mode: Not Supported 00:27:03.471 Traffic Based Keep ALive: Not Supported 00:27:03.471 Namespace Granularity: Not Supported 00:27:03.471 SQ Associations: Not Supported 00:27:03.471 UUID List: Not Supported 00:27:03.471 Multi-Domain Subsystem: Not Supported 00:27:03.471 Fixed Capacity Management: Not Supported 00:27:03.471 Variable Capacity Management: Not Supported 00:27:03.471 Delete Endurance Group: Not Supported 00:27:03.471 Delete NVM Set: Not Supported 00:27:03.471 Extended LBA Formats Supported: Not Supported 00:27:03.471 Flexible Data Placement Supported: Not Supported 00:27:03.471 00:27:03.471 Controller Memory Buffer Support 00:27:03.471 ================================ 00:27:03.471 Supported: No 00:27:03.471 00:27:03.471 Persistent Memory Region Support 00:27:03.471 ================================ 00:27:03.471 Supported: No 00:27:03.471 00:27:03.471 Admin Command Set Attributes 00:27:03.471 ============================ 00:27:03.471 Security Send/Receive: Not Supported 00:27:03.471 Format NVM: Not Supported 00:27:03.471 Firmware Activate/Download: Not Supported 00:27:03.471 Namespace Management: Not Supported 00:27:03.471 Device Self-Test: Not Supported 00:27:03.472 Directives: Not Supported 00:27:03.472 NVMe-MI: Not Supported 00:27:03.472 Virtualization Management: Not Supported 00:27:03.472 Doorbell Buffer Config: Not Supported 00:27:03.472 Get LBA Status Capability: Not Supported 00:27:03.472 Command & Feature Lockdown Capability: Not Supported 00:27:03.472 Abort Command Limit: 4 00:27:03.472 Async Event Request Limit: 4 00:27:03.472 Number of Firmware Slots: N/A 00:27:03.472 Firmware Slot 1 Read-Only: N/A 00:27:03.472 Firmware Activation Without Reset: N/A 00:27:03.472 Multiple 
Update Detection Support: N/A 00:27:03.472 Firmware Update Granularity: No Information Provided 00:27:03.472 Per-Namespace SMART Log: No 00:27:03.472 Asymmetric Namespace Access Log Page: Not Supported 00:27:03.472 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:03.472 Command Effects Log Page: Supported 00:27:03.472 Get Log Page Extended Data: Supported 00:27:03.472 Telemetry Log Pages: Not Supported 00:27:03.472 Persistent Event Log Pages: Not Supported 00:27:03.472 Supported Log Pages Log Page: May Support 00:27:03.472 Commands Supported & Effects Log Page: Not Supported 00:27:03.472 Feature Identifiers & Effects Log Page:May Support 00:27:03.472 NVMe-MI Commands & Effects Log Page: May Support 00:27:03.472 Data Area 4 for Telemetry Log: Not Supported 00:27:03.472 Error Log Page Entries Supported: 128 00:27:03.472 Keep Alive: Supported 00:27:03.472 Keep Alive Granularity: 10000 ms 00:27:03.472 00:27:03.472 NVM Command Set Attributes 00:27:03.472 ========================== 00:27:03.472 Submission Queue Entry Size 00:27:03.472 Max: 64 00:27:03.472 Min: 64 00:27:03.472 Completion Queue Entry Size 00:27:03.472 Max: 16 00:27:03.472 Min: 16 00:27:03.472 Number of Namespaces: 32 00:27:03.472 Compare Command: Supported 00:27:03.472 Write Uncorrectable Command: Not Supported 00:27:03.472 Dataset Management Command: Supported 00:27:03.472 Write Zeroes Command: Supported 00:27:03.472 Set Features Save Field: Not Supported 00:27:03.472 Reservations: Supported 00:27:03.472 Timestamp: Not Supported 00:27:03.472 Copy: Supported 00:27:03.472 Volatile Write Cache: Present 00:27:03.472 Atomic Write Unit (Normal): 1 00:27:03.472 Atomic Write Unit (PFail): 1 00:27:03.472 Atomic Compare & Write Unit: 1 00:27:03.472 Fused Compare & Write: Supported 00:27:03.472 Scatter-Gather List 00:27:03.472 SGL Command Set: Supported 00:27:03.472 SGL Keyed: Supported 00:27:03.472 SGL Bit Bucket Descriptor: Not Supported 00:27:03.472 SGL Metadata Pointer: Not Supported 00:27:03.472 Oversized SGL: Not Supported 00:27:03.472 SGL Metadata Address: Not Supported 00:27:03.472 SGL Offset: Supported 00:27:03.472 Transport SGL Data Block: Not Supported 00:27:03.472 Replay Protected Memory Block: Not Supported 00:27:03.472 00:27:03.472 Firmware Slot Information 00:27:03.472 ========================= 00:27:03.472 Active slot: 1 00:27:03.472 Slot 1 Firmware Revision: 24.01.1 00:27:03.472 00:27:03.472 00:27:03.472 Commands Supported and Effects 00:27:03.472 ============================== 00:27:03.472 Admin Commands 00:27:03.472 -------------- 00:27:03.472 Get Log Page (02h): Supported 00:27:03.472 Identify (06h): Supported 00:27:03.472 Abort (08h): Supported 00:27:03.472 Set Features (09h): Supported 00:27:03.472 Get Features (0Ah): Supported 00:27:03.472 Asynchronous Event Request (0Ch): Supported 00:27:03.472 Keep Alive (18h): Supported 00:27:03.472 I/O Commands 00:27:03.472 ------------ 00:27:03.472 Flush (00h): Supported LBA-Change 00:27:03.472 Write (01h): Supported LBA-Change 00:27:03.472 Read (02h): Supported 00:27:03.472 Compare (05h): Supported 00:27:03.472 Write Zeroes (08h): Supported LBA-Change 00:27:03.472 Dataset Management (09h): Supported LBA-Change 00:27:03.472 Copy (19h): Supported LBA-Change 00:27:03.472 Unknown (79h): Supported LBA-Change 00:27:03.472 Unknown (7Ah): Supported 00:27:03.472 00:27:03.472 Error Log 00:27:03.472 ========= 00:27:03.472 00:27:03.472 Arbitration 00:27:03.472 =========== 00:27:03.472 Arbitration Burst: 1 00:27:03.472 00:27:03.472 Power Management 00:27:03.472 ================ 00:27:03.472 
Number of Power States: 1 00:27:03.472 Current Power State: Power State #0 00:27:03.472 Power State #0: 00:27:03.472 Max Power: 0.00 W 00:27:03.472 Non-Operational State: Operational 00:27:03.472 Entry Latency: Not Reported 00:27:03.472 Exit Latency: Not Reported 00:27:03.472 Relative Read Throughput: 0 00:27:03.472 Relative Read Latency: 0 00:27:03.473 Relative Write Throughput: 0 00:27:03.473 Relative Write Latency: 0 00:27:03.473 Idle Power: Not Reported 00:27:03.473 Active Power: Not Reported 00:27:03.473 Non-Operational Permissive Mode: Not Supported 00:27:03.473 00:27:03.473 Health Information 00:27:03.473 ================== 00:27:03.473 Critical Warnings: 00:27:03.473 Available Spare Space: OK 00:27:03.473 Temperature: OK 00:27:03.473 Device Reliability: OK 00:27:03.473 Read Only: No 00:27:03.473 Volatile Memory Backup: OK 00:27:03.473 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:03.473 Temperature Threshold: [2024-06-11 15:12:22.124048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.473 [2024-06-11 15:12:22.124055] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.473 [2024-06-11 15:12:22.124060] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x139b9e0) 00:27:03.473 [2024-06-11 15:12:22.124069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.473 [2024-06-11 15:12:22.124085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14040d0, cid 7, qid 0 00:27:03.473 [2024-06-11 15:12:22.124254] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.473 [2024-06-11 15:12:22.124263] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.473 [2024-06-11 15:12:22.124267] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.473 [2024-06-11 15:12:22.124272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14040d0) on tqpair=0x139b9e0 00:27:03.473 [2024-06-11 15:12:22.124310] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:03.473 [2024-06-11 15:12:22.124327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.473 [2024-06-11 15:12:22.124335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.473 [2024-06-11 15:12:22.124343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.473 [2024-06-11 15:12:22.124351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.478 [2024-06-11 15:12:22.124361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124365] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124370] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.478 [2024-06-11 15:12:22.124379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.478 [2024-06-11 15:12:22.124395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 
00:27:03.478 [2024-06-11 15:12:22.124515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.478 [2024-06-11 15:12:22.124524] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.478 [2024-06-11 15:12:22.124528] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124533] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.478 [2024-06-11 15:12:22.124543] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124548] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124552] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.478 [2024-06-11 15:12:22.124561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.478 [2024-06-11 15:12:22.124580] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.478 [2024-06-11 15:12:22.124729] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.478 [2024-06-11 15:12:22.124739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.478 [2024-06-11 15:12:22.124743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124748] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.478 [2024-06-11 15:12:22.124754] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:03.478 [2024-06-11 15:12:22.124760] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:03.478 [2024-06-11 15:12:22.124772] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.478 [2024-06-11 15:12:22.124791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.478 [2024-06-11 15:12:22.124805] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.478 [2024-06-11 15:12:22.124914] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.478 [2024-06-11 15:12:22.124923] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.478 [2024-06-11 15:12:22.124927] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.478 [2024-06-11 15:12:22.124946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.124956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.478 [2024-06-11 15:12:22.124965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.478 [2024-06-11 
15:12:22.124979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.478 [2024-06-11 15:12:22.125139] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.478 [2024-06-11 15:12:22.125148] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.478 [2024-06-11 15:12:22.125152] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.125157] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.478 [2024-06-11 15:12:22.125170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.125175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.125180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.478 [2024-06-11 15:12:22.125189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.478 [2024-06-11 15:12:22.125203] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.478 [2024-06-11 15:12:22.125339] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.478 [2024-06-11 15:12:22.125348] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.478 [2024-06-11 15:12:22.125352] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.125357] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.478 [2024-06-11 15:12:22.125370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.125376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.125380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.478 [2024-06-11 15:12:22.125389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.478 [2024-06-11 15:12:22.125406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.478 [2024-06-11 15:12:22.125607] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.478 [2024-06-11 15:12:22.125615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.478 [2024-06-11 15:12:22.125620] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.125624] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.478 [2024-06-11 15:12:22.125638] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.478 [2024-06-11 15:12:22.125643] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.125648] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.479 [2024-06-11 15:12:22.125656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.479 [2024-06-11 15:12:22.125670] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.479 [2024-06-11 15:12:22.125780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:27:03.479 [2024-06-11 15:12:22.125789] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.479 [2024-06-11 15:12:22.125793] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.125798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.479 [2024-06-11 15:12:22.125811] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.125816] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.125821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.479 [2024-06-11 15:12:22.125829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.479 [2024-06-11 15:12:22.125842] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.479 [2024-06-11 15:12:22.125966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.479 [2024-06-11 15:12:22.125975] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.479 [2024-06-11 15:12:22.125979] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.125984] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.479 [2024-06-11 15:12:22.125997] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.126003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.126007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.479 [2024-06-11 15:12:22.126016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.479 [2024-06-11 15:12:22.130030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.479 [2024-06-11 15:12:22.130045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.479 [2024-06-11 15:12:22.130053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.479 [2024-06-11 15:12:22.130058] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.130062] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.479 [2024-06-11 15:12:22.130077] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.130082] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.130087] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139b9e0) 00:27:03.479 [2024-06-11 15:12:22.130096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.479 [2024-06-11 15:12:22.130115] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b50, cid 3, qid 0 00:27:03.479 [2024-06-11 15:12:22.130335] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.479 [2024-06-11 15:12:22.130344] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.479 [2024-06-11 15:12:22.130348] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.479 [2024-06-11 15:12:22.130353] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1403b50) on tqpair=0x139b9e0 00:27:03.479 [2024-06-11 15:12:22.130364] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:27:03.479 0 Kelvin (-273 Celsius) 00:27:03.479 Available Spare: 0% 00:27:03.479 Available Spare Threshold: 0% 00:27:03.479 Life Percentage Used: 0% 00:27:03.479 Data Units Read: 0 00:27:03.479 Data Units Written: 0 00:27:03.479 Host Read Commands: 0 00:27:03.479 Host Write Commands: 0 00:27:03.479 Controller Busy Time: 0 minutes 00:27:03.479 Power Cycles: 0 00:27:03.479 Power On Hours: 0 hours 00:27:03.479 Unsafe Shutdowns: 0 00:27:03.479 Unrecoverable Media Errors: 0 00:27:03.479 Lifetime Error Log Entries: 0 00:27:03.479 Warning Temperature Time: 0 minutes 00:27:03.479 Critical Temperature Time: 0 minutes 00:27:03.479 00:27:03.479 Number of Queues 00:27:03.479 ================ 00:27:03.479 Number of I/O Submission Queues: 127 00:27:03.479 Number of I/O Completion Queues: 127 00:27:03.479 00:27:03.479 Active Namespaces 00:27:03.479 ================= 00:27:03.479 Namespace ID:1 00:27:03.479 Error Recovery Timeout: Unlimited 00:27:03.479 Command Set Identifier: NVM (00h) 00:27:03.479 Deallocate: Supported 00:27:03.479 Deallocated/Unwritten Error: Not Supported 00:27:03.479 Deallocated Read Value: Unknown 00:27:03.479 Deallocate in Write Zeroes: Not Supported 00:27:03.479 Deallocated Guard Field: 0xFFFF 00:27:03.479 Flush: Supported 00:27:03.479 Reservation: Supported 00:27:03.479 Namespace Sharing Capabilities: Multiple Controllers 00:27:03.479 Size (in LBAs): 131072 (0GiB) 00:27:03.479 Capacity (in LBAs): 131072 (0GiB) 00:27:03.479 Utilization (in LBAs): 131072 (0GiB) 00:27:03.479 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:03.479 EUI64: ABCDEF0123456789 00:27:03.479 UUID: 8ac98f2a-24c0-41f6-959d-62253da41b90 00:27:03.479 Thin Provisioning: Not Supported 00:27:03.479 Per-NS Atomic Units: Yes 00:27:03.479 Atomic Boundary Size (Normal): 0 00:27:03.479 Atomic Boundary Size (PFail): 0 00:27:03.479 Atomic Boundary Offset: 0 00:27:03.479 Maximum Single Source Range Length: 65535 00:27:03.479 Maximum Copy Length: 65535 00:27:03.480 Maximum Source Range Count: 1 00:27:03.480 NGUID/EUI64 Never Reused: No 00:27:03.480 Namespace Write Protected: No 00:27:03.480 Number of LBA Formats: 1 00:27:03.480 Current LBA Format: LBA Format #00 00:27:03.480 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:03.480 00:27:03.480 15:12:22 -- host/identify.sh@51 -- # sync 00:27:03.480 15:12:22 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.480 15:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.480 15:12:22 -- common/autotest_common.sh@10 -- # set +x 00:27:03.480 15:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.480 15:12:22 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:03.480 15:12:22 -- host/identify.sh@56 -- # nvmftestfini 00:27:03.480 15:12:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:03.480 15:12:22 -- nvmf/common.sh@116 -- # sync 00:27:03.480 15:12:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:03.480 15:12:22 -- nvmf/common.sh@119 -- # set +e 00:27:03.480 15:12:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:03.480 15:12:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:03.480 rmmod nvme_tcp 00:27:03.480 rmmod 
nvme_fabrics 00:27:03.480 rmmod nvme_keyring 00:27:03.480 15:12:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:03.480 15:12:22 -- nvmf/common.sh@123 -- # set -e 00:27:03.480 15:12:22 -- nvmf/common.sh@124 -- # return 0 00:27:03.480 15:12:22 -- nvmf/common.sh@477 -- # '[' -n 3410080 ']' 00:27:03.480 15:12:22 -- nvmf/common.sh@478 -- # killprocess 3410080 00:27:03.480 15:12:22 -- common/autotest_common.sh@926 -- # '[' -z 3410080 ']' 00:27:03.480 15:12:22 -- common/autotest_common.sh@930 -- # kill -0 3410080 00:27:03.480 15:12:22 -- common/autotest_common.sh@931 -- # uname 00:27:03.480 15:12:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:03.480 15:12:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3410080 00:27:03.480 15:12:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:03.480 15:12:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:03.480 15:12:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3410080' 00:27:03.480 killing process with pid 3410080 00:27:03.480 15:12:22 -- common/autotest_common.sh@945 -- # kill 3410080 00:27:03.480 [2024-06-11 15:12:22.284744] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:03.480 15:12:22 -- common/autotest_common.sh@950 -- # wait 3410080 00:27:03.739 15:12:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:03.739 15:12:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:03.739 15:12:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:03.739 15:12:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.739 15:12:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:03.739 15:12:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.739 15:12:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.739 15:12:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.271 15:12:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:06.271 00:27:06.271 real 0m10.749s 00:27:06.271 user 0m8.657s 00:27:06.271 sys 0m5.446s 00:27:06.271 15:12:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.271 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:27:06.271 ************************************ 00:27:06.271 END TEST nvmf_identify 00:27:06.271 ************************************ 00:27:06.271 15:12:24 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:06.271 15:12:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:06.271 15:12:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:06.271 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:27:06.271 ************************************ 00:27:06.271 START TEST nvmf_perf 00:27:06.271 ************************************ 00:27:06.271 15:12:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:06.271 * Looking for test storage... 
00:27:06.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.271 15:12:24 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.271 15:12:24 -- nvmf/common.sh@7 -- # uname -s 00:27:06.271 15:12:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.271 15:12:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.271 15:12:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.271 15:12:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.271 15:12:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.271 15:12:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.271 15:12:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.271 15:12:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.271 15:12:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.271 15:12:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.271 15:12:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:06.271 15:12:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:06.271 15:12:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.271 15:12:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.271 15:12:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.271 15:12:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.271 15:12:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.271 15:12:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.271 15:12:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.271 15:12:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.271 15:12:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.271 15:12:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.271 15:12:24 -- paths/export.sh@5 -- # export PATH 00:27:06.271 15:12:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.271 15:12:24 -- nvmf/common.sh@46 -- # : 0 00:27:06.271 15:12:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:06.271 15:12:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:06.271 15:12:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:06.271 15:12:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.271 15:12:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.271 15:12:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:06.271 15:12:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:06.271 15:12:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:06.271 15:12:24 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:06.271 15:12:24 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:06.271 15:12:24 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:06.271 15:12:24 -- host/perf.sh@17 -- # nvmftestinit 00:27:06.271 15:12:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:06.271 15:12:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.271 15:12:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:06.271 15:12:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:06.271 15:12:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:06.271 15:12:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.271 15:12:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.271 15:12:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.271 15:12:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:06.271 15:12:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:06.271 15:12:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:06.271 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:27:12.830 15:12:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:12.830 15:12:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:12.830 15:12:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:12.830 15:12:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:12.830 15:12:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:12.830 15:12:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:12.830 15:12:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:12.830 15:12:30 -- nvmf/common.sh@294 -- # net_devs=() 
00:27:12.830 15:12:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:12.830 15:12:30 -- nvmf/common.sh@295 -- # e810=() 00:27:12.830 15:12:30 -- nvmf/common.sh@295 -- # local -ga e810 00:27:12.830 15:12:30 -- nvmf/common.sh@296 -- # x722=() 00:27:12.830 15:12:30 -- nvmf/common.sh@296 -- # local -ga x722 00:27:12.830 15:12:30 -- nvmf/common.sh@297 -- # mlx=() 00:27:12.830 15:12:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:12.830 15:12:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.830 15:12:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:12.830 15:12:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:12.830 15:12:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:12.830 15:12:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:12.830 15:12:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:12.830 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:12.830 15:12:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:12.830 15:12:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:12.830 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:12.830 15:12:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:12.830 15:12:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:12.830 15:12:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.830 15:12:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:12.830 15:12:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:27:12.830 15:12:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:12.830 Found net devices under 0000:af:00.0: cvl_0_0 00:27:12.830 15:12:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.830 15:12:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:12.830 15:12:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.830 15:12:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:12.830 15:12:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.830 15:12:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:12.830 Found net devices under 0000:af:00.1: cvl_0_1 00:27:12.830 15:12:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.830 15:12:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:12.830 15:12:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:12.830 15:12:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:12.830 15:12:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.830 15:12:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.830 15:12:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.830 15:12:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:12.830 15:12:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.830 15:12:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.830 15:12:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:12.830 15:12:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.830 15:12:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.830 15:12:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:12.830 15:12:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:12.830 15:12:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.830 15:12:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.830 15:12:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.830 15:12:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.830 15:12:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:12.830 15:12:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.830 15:12:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.830 15:12:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.830 15:12:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:12.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:27:12.830 00:27:12.830 --- 10.0.0.2 ping statistics --- 00:27:12.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.830 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:12.830 15:12:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:27:12.830 00:27:12.830 --- 10.0.0.1 ping statistics --- 00:27:12.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.830 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:27:12.830 15:12:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.830 15:12:30 -- nvmf/common.sh@410 -- # return 0 00:27:12.830 15:12:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:12.830 15:12:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.830 15:12:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:12.830 15:12:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.830 15:12:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:12.830 15:12:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:12.830 15:12:30 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:12.830 15:12:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:12.830 15:12:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:12.830 15:12:30 -- common/autotest_common.sh@10 -- # set +x 00:27:12.831 15:12:31 -- nvmf/common.sh@469 -- # nvmfpid=3414395 00:27:12.831 15:12:31 -- nvmf/common.sh@470 -- # waitforlisten 3414395 00:27:12.831 15:12:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:12.831 15:12:31 -- common/autotest_common.sh@819 -- # '[' -z 3414395 ']' 00:27:12.831 15:12:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.831 15:12:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:12.831 15:12:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.831 15:12:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:12.831 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:27:12.831 [2024-06-11 15:12:31.052316] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:12.831 [2024-06-11 15:12:31.052371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.831 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.831 [2024-06-11 15:12:31.148309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.831 [2024-06-11 15:12:31.235627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:12.831 [2024-06-11 15:12:31.235772] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.831 [2024-06-11 15:12:31.235789] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.831 [2024-06-11 15:12:31.235799] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:12.831 [2024-06-11 15:12:31.235847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.831 [2024-06-11 15:12:31.235874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.831 [2024-06-11 15:12:31.235986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.831 [2024-06-11 15:12:31.235987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.397 15:12:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:13.397 15:12:31 -- common/autotest_common.sh@852 -- # return 0 00:27:13.397 15:12:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:13.397 15:12:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:13.397 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:27:13.397 15:12:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.397 15:12:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:13.397 15:12:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:16.673 15:12:35 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:16.673 15:12:35 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:16.673 15:12:35 -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:27:16.673 15:12:35 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:16.930 15:12:35 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:16.930 15:12:35 -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:27:16.930 15:12:35 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:16.930 15:12:35 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:16.930 15:12:35 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:17.187 [2024-06-11 15:12:35.862056] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.187 15:12:35 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.444 15:12:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:17.444 15:12:36 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:17.701 15:12:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:17.701 15:12:36 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:17.958 15:12:36 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.215 [2024-06-11 15:12:36.838132] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.215 15:12:36 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:18.472 15:12:37 -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:27:18.472 15:12:37 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:27:18.472 15:12:37 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
00:27:18.472 15:12:37 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:27:19.844 Initializing NVMe Controllers 00:27:19.844 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:27:19.844 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:27:19.844 Initialization complete. Launching workers. 00:27:19.844 ======================================================== 00:27:19.844 Latency(us) 00:27:19.844 Device Information : IOPS MiB/s Average min max 00:27:19.844 PCIE (0000:86:00.0) NSID 1 from core 0: 70289.71 274.57 454.44 26.05 7425.21 00:27:19.844 ======================================================== 00:27:19.844 Total : 70289.71 274.57 454.44 26.05 7425.21 00:27:19.844 00:27:19.844 15:12:38 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:19.844 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.214 Initializing NVMe Controllers 00:27:21.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:21.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:21.214 Initialization complete. Launching workers. 00:27:21.214 ======================================================== 00:27:21.214 Latency(us) 00:27:21.214 Device Information : IOPS MiB/s Average min max 00:27:21.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 115.00 0.45 8911.02 267.22 45091.48 00:27:21.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18281.67 7959.18 55863.75 00:27:21.214 ======================================================== 00:27:21.214 Total : 170.00 0.66 11942.70 267.22 55863.75 00:27:21.214 00:27:21.214 15:12:39 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:21.214 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.146 Initializing NVMe Controllers 00:27:22.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:22.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:22.146 Initialization complete. Launching workers. 
00:27:22.146 ======================================================== 00:27:22.146 Latency(us) 00:27:22.146 Device Information : IOPS MiB/s Average min max 00:27:22.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7042.99 27.51 4551.83 653.76 9949.24 00:27:22.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3969.00 15.50 8113.31 6982.54 15833.61 00:27:22.146 ======================================================== 00:27:22.146 Total : 11011.99 43.02 5835.48 653.76 15833.61 00:27:22.146 00:27:22.146 15:12:40 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:22.146 15:12:40 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:22.146 15:12:40 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:22.146 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.671 Initializing NVMe Controllers 00:27:24.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.671 Controller IO queue size 128, less than required. 00:27:24.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:24.671 Controller IO queue size 128, less than required. 00:27:24.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:24.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:24.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:24.671 Initialization complete. Launching workers. 00:27:24.671 ======================================================== 00:27:24.671 Latency(us) 00:27:24.671 Device Information : IOPS MiB/s Average min max 00:27:24.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 869.00 217.25 152020.26 92303.89 236788.15 00:27:24.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 587.82 146.96 225180.43 73781.61 326239.25 00:27:24.671 ======================================================== 00:27:24.671 Total : 1456.83 364.21 181540.15 73781.61 326239.25 00:27:24.671 00:27:24.929 15:12:43 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:24.929 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.929 No valid NVMe controllers or AIO or URING devices found 00:27:24.929 Initializing NVMe Controllers 00:27:24.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.929 Controller IO queue size 128, less than required. 00:27:24.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:24.929 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:24.929 Controller IO queue size 128, less than required. 00:27:24.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:24.929 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:24.930 WARNING: Some requested NVMe devices were skipped 00:27:24.930 15:12:43 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:25.187 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.761 Initializing NVMe Controllers 00:27:27.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:27.761 Controller IO queue size 128, less than required. 00:27:27.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:27.761 Controller IO queue size 128, less than required. 00:27:27.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:27.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:27.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:27.761 Initialization complete. Launching workers. 00:27:27.761 00:27:27.761 ==================== 00:27:27.761 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:27.761 TCP transport: 00:27:27.761 polls: 25102 00:27:27.761 idle_polls: 7673 00:27:27.761 sock_completions: 17429 00:27:27.761 nvme_completions: 3360 00:27:27.761 submitted_requests: 5166 00:27:27.761 queued_requests: 1 00:27:27.761 00:27:27.761 ==================== 00:27:27.761 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:27.761 TCP transport: 00:27:27.761 polls: 25288 00:27:27.761 idle_polls: 7798 00:27:27.761 sock_completions: 17490 00:27:27.761 nvme_completions: 3649 00:27:27.761 submitted_requests: 5703 00:27:27.761 queued_requests: 1 00:27:27.761 ======================================================== 00:27:27.761 Latency(us) 00:27:27.761 Device Information : IOPS MiB/s Average min max 00:27:27.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 903.49 225.87 146397.01 87679.60 257350.80 00:27:27.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 975.99 244.00 134850.09 58675.73 189872.55 00:27:27.761 ======================================================== 00:27:27.761 Total : 1879.48 469.87 140400.85 58675.73 257350.80 00:27:27.761 00:27:27.761 15:12:46 -- host/perf.sh@66 -- # sync 00:27:27.761 15:12:46 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:27.761 15:12:46 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:27.761 15:12:46 -- host/perf.sh@71 -- # '[' -n 0000:86:00.0 ']' 00:27:27.761 15:12:46 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:31.043 15:12:49 -- host/perf.sh@72 -- # ls_guid=d5d42846-91b6-4c42-91b4-07c1f3a6f440 00:27:31.043 15:12:49 -- host/perf.sh@73 -- # get_lvs_free_mb d5d42846-91b6-4c42-91b4-07c1f3a6f440 00:27:31.043 15:12:49 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d5d42846-91b6-4c42-91b4-07c1f3a6f440 00:27:31.043 15:12:49 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:31.043 15:12:49 -- common/autotest_common.sh@1345 -- # local fc 00:27:31.043 15:12:49 -- common/autotest_common.sh@1346 -- # local cs 00:27:31.043 15:12:49 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:31.301 15:12:49 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:31.301 { 00:27:31.301 "uuid": "d5d42846-91b6-4c42-91b4-07c1f3a6f440", 00:27:31.301 "name": "lvs_0", 00:27:31.301 "base_bdev": "Nvme0n1", 00:27:31.301 "total_data_clusters": 238234, 00:27:31.301 "free_clusters": 238234, 00:27:31.301 "block_size": 512, 00:27:31.301 "cluster_size": 4194304 00:27:31.301 } 00:27:31.301 ]' 00:27:31.301 15:12:49 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d5d42846-91b6-4c42-91b4-07c1f3a6f440") .free_clusters' 00:27:31.301 15:12:50 -- common/autotest_common.sh@1348 -- # fc=238234 00:27:31.301 15:12:50 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d5d42846-91b6-4c42-91b4-07c1f3a6f440") .cluster_size' 00:27:31.301 15:12:50 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:31.301 15:12:50 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:27:31.301 15:12:50 -- common/autotest_common.sh@1353 -- # echo 952936 00:27:31.301 952936 00:27:31.301 15:12:50 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:27:31.301 15:12:50 -- host/perf.sh@78 -- # free_mb=20480 00:27:31.301 15:12:50 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5d42846-91b6-4c42-91b4-07c1f3a6f440 lbd_0 20480 00:27:31.866 15:12:50 -- host/perf.sh@80 -- # lb_guid=805a518c-54b5-456d-b0fe-72c330f8e693 00:27:31.866 15:12:50 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 805a518c-54b5-456d-b0fe-72c330f8e693 lvs_n_0 00:27:32.799 15:12:51 -- host/perf.sh@83 -- # ls_nested_guid=2711002f-435c-4e50-95a4-a841b822c307 00:27:32.799 15:12:51 -- host/perf.sh@84 -- # get_lvs_free_mb 2711002f-435c-4e50-95a4-a841b822c307 00:27:32.799 15:12:51 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2711002f-435c-4e50-95a4-a841b822c307 00:27:32.799 15:12:51 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:32.799 15:12:51 -- common/autotest_common.sh@1345 -- # local fc 00:27:32.799 15:12:51 -- common/autotest_common.sh@1346 -- # local cs 00:27:32.799 15:12:51 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:32.799 15:12:51 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:32.799 { 00:27:32.799 "uuid": "d5d42846-91b6-4c42-91b4-07c1f3a6f440", 00:27:32.799 "name": "lvs_0", 00:27:32.799 "base_bdev": "Nvme0n1", 00:27:32.799 "total_data_clusters": 238234, 00:27:32.799 "free_clusters": 233114, 00:27:32.799 "block_size": 512, 00:27:32.799 "cluster_size": 4194304 00:27:32.799 }, 00:27:32.799 { 00:27:32.799 "uuid": "2711002f-435c-4e50-95a4-a841b822c307", 00:27:32.799 "name": "lvs_n_0", 00:27:32.799 "base_bdev": "805a518c-54b5-456d-b0fe-72c330f8e693", 00:27:32.799 "total_data_clusters": 5114, 00:27:32.799 "free_clusters": 5114, 00:27:32.799 "block_size": 512, 00:27:32.799 "cluster_size": 4194304 00:27:32.799 } 00:27:32.799 ]' 00:27:32.799 15:12:51 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2711002f-435c-4e50-95a4-a841b822c307") .free_clusters' 00:27:32.799 15:12:51 -- common/autotest_common.sh@1348 -- # fc=5114 00:27:32.799 15:12:51 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2711002f-435c-4e50-95a4-a841b822c307") .cluster_size' 00:27:33.057 15:12:51 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:33.057 15:12:51 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:27:33.057 15:12:51 -- common/autotest_common.sh@1353 -- # echo 20456 00:27:33.057 20456 00:27:33.057 15:12:51 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:33.057 15:12:51 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2711002f-435c-4e50-95a4-a841b822c307 lbd_nest_0 20456 00:27:33.057 15:12:51 -- host/perf.sh@88 -- # lb_nested_guid=651a15b2-8b5f-4248-b39d-d50fdd9b6c73 00:27:33.057 15:12:51 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:33.314 15:12:52 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:33.314 15:12:52 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 651a15b2-8b5f-4248-b39d-d50fdd9b6c73 00:27:33.572 15:12:52 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.830 15:12:52 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:33.830 15:12:52 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:33.830 15:12:52 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:33.830 15:12:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:33.830 15:12:52 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:33.830 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.021 Initializing NVMe Controllers 00:27:46.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:46.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:46.021 Initialization complete. Launching workers. 00:27:46.021 ======================================================== 00:27:46.021 Latency(us) 00:27:46.021 Device Information : IOPS MiB/s Average min max 00:27:46.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.30 0.02 20313.74 269.21 48256.11 00:27:46.021 ======================================================== 00:27:46.021 Total : 49.30 0.02 20313.74 269.21 48256.11 00:27:46.021 00:27:46.021 15:13:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:46.021 15:13:02 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:46.021 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.985 Initializing NVMe Controllers 00:27:55.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:55.985 Initialization complete. Launching workers. 
00:27:55.985 ======================================================== 00:27:55.985 Latency(us) 00:27:55.985 Device Information : IOPS MiB/s Average min max 00:27:55.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 9.88 12667.31 6982.60 47885.39 00:27:55.985 ======================================================== 00:27:55.985 Total : 79.00 9.88 12667.31 6982.60 47885.39 00:27:55.985 00:27:55.985 15:13:13 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:55.985 15:13:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:55.985 15:13:13 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:55.985 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.947 Initializing NVMe Controllers 00:28:05.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:05.947 Initialization complete. Launching workers. 00:28:05.947 ======================================================== 00:28:05.947 Latency(us) 00:28:05.947 Device Information : IOPS MiB/s Average min max 00:28:05.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6372.15 3.11 5021.82 359.17 41252.28 00:28:05.947 ======================================================== 00:28:05.947 Total : 6372.15 3.11 5021.82 359.17 41252.28 00:28:05.947 00:28:05.947 15:13:23 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:05.947 15:13:23 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:05.947 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.904 Initializing NVMe Controllers 00:28:15.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:15.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:15.904 Initialization complete. Launching workers. 00:28:15.904 ======================================================== 00:28:15.904 Latency(us) 00:28:15.904 Device Information : IOPS MiB/s Average min max 00:28:15.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1718.50 214.81 18622.70 1647.95 40006.29 00:28:15.904 ======================================================== 00:28:15.904 Total : 1718.50 214.81 18622.70 1647.95 40006.29 00:28:15.904 00:28:15.904 15:13:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:15.904 15:13:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:15.904 15:13:34 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:15.904 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.869 Initializing NVMe Controllers 00:28:25.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.869 Controller IO queue size 128, less than required. 00:28:25.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:25.869 Initialization complete. Launching workers. 
00:28:25.869 ======================================================== 00:28:25.869 Latency(us) 00:28:25.869 Device Information : IOPS MiB/s Average min max 00:28:25.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10077.47 4.92 12709.91 1967.72 25123.74 00:28:25.869 ======================================================== 00:28:25.869 Total : 10077.47 4.92 12709.91 1967.72 25123.74 00:28:25.869 00:28:25.869 15:13:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:25.869 15:13:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:25.869 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.063 Initializing NVMe Controllers 00:28:38.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:38.063 Controller IO queue size 128, less than required. 00:28:38.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:38.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:38.063 Initialization complete. Launching workers. 00:28:38.063 ======================================================== 00:28:38.063 Latency(us) 00:28:38.063 Device Information : IOPS MiB/s Average min max 00:28:38.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1195.58 149.45 107541.47 30759.48 221967.87 00:28:38.063 ======================================================== 00:28:38.063 Total : 1195.58 149.45 107541.47 30759.48 221967.87 00:28:38.063 00:28:38.063 15:13:54 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:38.063 15:13:55 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 651a15b2-8b5f-4248-b39d-d50fdd9b6c73 00:28:38.063 15:13:55 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:38.063 15:13:56 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 805a518c-54b5-456d-b0fe-72c330f8e693 00:28:38.063 15:13:56 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:38.063 15:13:56 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:38.063 15:13:56 -- host/perf.sh@114 -- # nvmftestfini 00:28:38.063 15:13:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:38.063 15:13:56 -- nvmf/common.sh@116 -- # sync 00:28:38.063 15:13:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:38.063 15:13:56 -- nvmf/common.sh@119 -- # set +e 00:28:38.063 15:13:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:38.063 15:13:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:38.063 rmmod nvme_tcp 00:28:38.063 rmmod nvme_fabrics 00:28:38.063 rmmod nvme_keyring 00:28:38.063 15:13:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:38.063 15:13:56 -- nvmf/common.sh@123 -- # set -e 00:28:38.063 15:13:56 -- nvmf/common.sh@124 -- # return 0 00:28:38.063 15:13:56 -- nvmf/common.sh@477 -- # '[' -n 3414395 ']' 00:28:38.063 15:13:56 -- nvmf/common.sh@478 -- # killprocess 3414395 00:28:38.063 15:13:56 -- common/autotest_common.sh@926 -- # '[' -z 3414395 ']' 00:28:38.063 15:13:56 -- common/autotest_common.sh@930 -- # 
kill -0 3414395 00:28:38.064 15:13:56 -- common/autotest_common.sh@931 -- # uname 00:28:38.064 15:13:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:38.064 15:13:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3414395 00:28:38.064 15:13:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:38.064 15:13:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:38.064 15:13:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3414395' 00:28:38.064 killing process with pid 3414395 00:28:38.064 15:13:56 -- common/autotest_common.sh@945 -- # kill 3414395 00:28:38.064 15:13:56 -- common/autotest_common.sh@950 -- # wait 3414395 00:28:39.965 15:13:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:39.965 15:13:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:39.965 15:13:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:39.965 15:13:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:39.965 15:13:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:39.965 15:13:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.965 15:13:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.965 15:13:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.959 15:14:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:41.959 00:28:41.959 real 1m35.786s 00:28:41.959 user 5m44.624s 00:28:41.959 sys 0m14.498s 00:28:41.959 15:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:41.959 15:14:00 -- common/autotest_common.sh@10 -- # set +x 00:28:41.959 ************************************ 00:28:41.959 END TEST nvmf_perf 00:28:41.959 ************************************ 00:28:41.959 15:14:00 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:41.959 15:14:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:41.959 15:14:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:41.959 15:14:00 -- common/autotest_common.sh@10 -- # set +x 00:28:41.959 ************************************ 00:28:41.959 START TEST nvmf_fio_host 00:28:41.959 ************************************ 00:28:41.959 15:14:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:41.959 * Looking for test storage... 
00:28:41.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:41.959 15:14:00 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:41.959 15:14:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.959 15:14:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.959 15:14:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.960 15:14:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.960 15:14:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.960 15:14:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.960 15:14:00 -- paths/export.sh@5 -- # export PATH 00:28:41.960 15:14:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.960 15:14:00 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:41.960 15:14:00 -- nvmf/common.sh@7 -- # uname -s 00:28:41.960 15:14:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.960 15:14:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.960 15:14:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.960 15:14:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.960 15:14:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.960 15:14:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.960 15:14:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.960 15:14:00 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.960 15:14:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.960 15:14:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.960 15:14:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:41.960 15:14:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:41.960 15:14:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.960 15:14:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.960 15:14:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.960 15:14:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:41.960 15:14:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.960 15:14:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.960 15:14:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.960 15:14:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.960 15:14:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.960 15:14:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.960 15:14:00 -- paths/export.sh@5 -- # export PATH 00:28:41.960 15:14:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.960 15:14:00 -- nvmf/common.sh@46 -- # : 0 00:28:41.960 15:14:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:41.960 15:14:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:41.960 15:14:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:41.960 15:14:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.960 15:14:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.960 15:14:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:41.960 15:14:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:41.960 15:14:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:41.960 15:14:00 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:41.960 15:14:00 -- host/fio.sh@14 -- # nvmftestinit 00:28:41.960 15:14:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:41.960 15:14:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.960 15:14:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:41.960 15:14:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:41.960 15:14:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:41.960 15:14:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.960 15:14:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.960 15:14:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.960 15:14:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:41.960 15:14:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:41.960 15:14:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:41.960 15:14:00 -- common/autotest_common.sh@10 -- # set +x 00:28:48.527 15:14:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:48.527 15:14:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:48.527 15:14:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:48.527 15:14:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:48.527 15:14:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:48.527 15:14:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:48.527 15:14:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:48.527 15:14:06 -- nvmf/common.sh@294 -- # net_devs=() 00:28:48.527 15:14:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:48.527 15:14:06 -- nvmf/common.sh@295 -- # e810=() 00:28:48.527 15:14:06 -- nvmf/common.sh@295 -- # local -ga e810 00:28:48.527 15:14:06 -- nvmf/common.sh@296 -- # x722=() 00:28:48.527 15:14:06 -- nvmf/common.sh@296 -- # local -ga x722 00:28:48.527 15:14:06 -- nvmf/common.sh@297 -- # mlx=() 00:28:48.527 15:14:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:48.527 15:14:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.527 15:14:06 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.527 15:14:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:48.527 15:14:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:48.527 15:14:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:48.527 15:14:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:48.527 15:14:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:48.527 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:48.527 15:14:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:48.527 15:14:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:48.527 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:48.527 15:14:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:48.527 15:14:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:48.527 15:14:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.527 15:14:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:48.527 15:14:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.527 15:14:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:48.527 Found net devices under 0000:af:00.0: cvl_0_0 00:28:48.527 15:14:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.527 15:14:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:48.527 15:14:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.527 15:14:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:48.527 15:14:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.527 15:14:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:48.527 Found net devices under 0000:af:00.1: cvl_0_1 
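The device discovery above resolves each supported PCI function to its kernel net device by listing /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of that lookup, assuming the same sysfs layout (the vendor/device allow-list logic is omitted):

for pci in 0000:af:00.0 0000:af:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue                  # skip functions with no bound netdev
        echo "Found net devices under $pci: $(basename "$netdir")"
    done
done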
00:28:48.527 15:14:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.527 15:14:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:48.527 15:14:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:48.527 15:14:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:48.527 15:14:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:48.527 15:14:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.527 15:14:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.527 15:14:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.527 15:14:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:48.527 15:14:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.527 15:14:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.527 15:14:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:48.527 15:14:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.527 15:14:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.527 15:14:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:48.527 15:14:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:48.527 15:14:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.527 15:14:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.527 15:14:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.527 15:14:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.527 15:14:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:48.527 15:14:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.527 15:14:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.527 15:14:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.527 15:14:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:48.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:28:48.527 00:28:48.527 --- 10.0.0.2 ping statistics --- 00:28:48.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.527 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:28:48.527 15:14:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:48.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:48.527 00:28:48.527 --- 10.0.0.1 ping statistics --- 00:28:48.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.527 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:48.527 15:14:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.527 15:14:07 -- nvmf/common.sh@410 -- # return 0 00:28:48.527 15:14:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:48.527 15:14:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.527 15:14:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:48.527 15:14:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:48.527 15:14:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.527 15:14:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:48.527 15:14:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:48.527 15:14:07 -- host/fio.sh@16 -- # [[ y != y ]] 00:28:48.527 15:14:07 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:48.527 15:14:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:48.527 15:14:07 -- common/autotest_common.sh@10 -- # set +x 00:28:48.527 15:14:07 -- host/fio.sh@24 -- # nvmfpid=3433858 00:28:48.527 15:14:07 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.527 15:14:07 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:48.527 15:14:07 -- host/fio.sh@28 -- # waitforlisten 3433858 00:28:48.527 15:14:07 -- common/autotest_common.sh@819 -- # '[' -z 3433858 ']' 00:28:48.527 15:14:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.527 15:14:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:48.527 15:14:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.528 15:14:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:48.528 15:14:07 -- common/autotest_common.sh@10 -- # set +x 00:28:48.528 [2024-06-11 15:14:07.161487] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:48.528 [2024-06-11 15:14:07.161541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.528 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.528 [2024-06-11 15:14:07.248168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.528 [2024-06-11 15:14:07.335893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:48.528 [2024-06-11 15:14:07.336046] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.528 [2024-06-11 15:14:07.336058] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.528 [2024-06-11 15:14:07.336068] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
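Before this second target start, nvmf_tcp_init (traced a few entries above) split the two E810 ports across network namespaces so initiator and target use separate TCP stacks on the same host. Condensed, the wiring is as follows (all commands as traced, run as root; address flushes omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP port in
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check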
00:28:48.528 [2024-06-11 15:14:07.340055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.528 [2024-06-11 15:14:07.340077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.528 [2024-06-11 15:14:07.340188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.528 [2024-06-11 15:14:07.340189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.463 15:14:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:49.463 15:14:08 -- common/autotest_common.sh@852 -- # return 0 00:28:49.463 15:14:08 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:49.721 [2024-06-11 15:14:08.332564] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.721 15:14:08 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:49.721 15:14:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:49.721 15:14:08 -- common/autotest_common.sh@10 -- # set +x 00:28:49.721 15:14:08 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:49.978 Malloc1 00:28:49.978 15:14:08 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:50.236 15:14:08 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:50.494 15:14:09 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.751 [2024-06-11 15:14:09.376374] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.751 15:14:09 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:51.009 15:14:09 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:51.009 15:14:09 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:51.009 15:14:09 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:51.009 15:14:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:51.009 15:14:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:51.009 15:14:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:51.009 15:14:09 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:51.009 15:14:09 -- common/autotest_common.sh@1320 -- # shift 00:28:51.009 15:14:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:51.009 15:14:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:51.010 15:14:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:51.010 15:14:09 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:28:51.010 15:14:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:51.010 15:14:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:51.010 15:14:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:51.010 15:14:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:51.010 15:14:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:51.010 15:14:09 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:51.010 15:14:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:51.010 15:14:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:51.010 15:14:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:51.010 15:14:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:51.010 15:14:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:51.268 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:51.268 fio-3.35 00:28:51.268 Starting 1 thread 00:28:51.268 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.797 00:28:53.797 test: (groupid=0, jobs=1): err= 0: pid=3434549: Tue Jun 11 15:14:12 2024 00:28:53.797 read: IOPS=8463, BW=33.1MiB/s (34.7MB/s)(66.4MiB/2007msec) 00:28:53.797 slat (nsec): min=1397, max=195612, avg=2467.03, stdev=2130.57 00:28:53.797 clat (usec): min=5034, max=14355, avg=8356.56, stdev=626.48 00:28:53.797 lat (usec): min=5068, max=14357, avg=8359.02, stdev=626.39 00:28:53.797 clat percentiles (usec): 00:28:53.797 | 1.00th=[ 6915], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7898], 00:28:53.797 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8455], 00:28:53.797 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9372], 00:28:53.797 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[12518], 99.95th=[13566], 00:28:53.797 | 99.99th=[14353] 00:28:53.797 bw ( KiB/s): min=32582, max=34352, per=99.90%, avg=33823.50, stdev=835.63, samples=4 00:28:53.797 iops : min= 8145, max= 8588, avg=8455.75, stdev=209.15, samples=4 00:28:53.797 write: IOPS=8460, BW=33.0MiB/s (34.7MB/s)(66.3MiB/2007msec); 0 zone resets 00:28:53.797 slat (nsec): min=1453, max=159379, avg=2585.03, stdev=1504.02 00:28:53.797 clat (usec): min=1927, max=13261, avg=6662.98, stdev=565.78 00:28:53.797 lat (usec): min=1942, max=13262, avg=6665.56, stdev=565.68 00:28:53.797 clat percentiles (usec): 00:28:53.797 | 1.00th=[ 5342], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6259], 00:28:53.797 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:28:53.797 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7308], 95.00th=[ 7504], 00:28:53.797 | 99.00th=[ 7832], 99.50th=[ 8094], 99.90th=[11863], 99.95th=[12649], 00:28:53.797 | 99.99th=[12780] 00:28:53.797 bw ( KiB/s): min=33469, max=34048, per=99.95%, avg=33823.25, stdev=260.01, samples=4 00:28:53.797 iops : min= 8367, max= 8512, avg=8455.75, stdev=65.12, samples=4 00:28:53.797 lat (msec) : 2=0.01%, 4=0.05%, 10=99.64%, 20=0.30% 00:28:53.797 cpu : usr=67.75%, sys=27.22%, ctx=59, majf=0, minf=5 00:28:53.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:53.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.797 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:53.797 issued rwts: total=16987,16980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:53.797 00:28:53.797 Run status group 0 (all jobs): 00:28:53.797 READ: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=66.4MiB (69.6MB), run=2007-2007msec 00:28:53.797 WRITE: bw=33.0MiB/s (34.7MB/s), 33.0MiB/s-33.0MiB/s (34.7MB/s-34.7MB/s), io=66.3MiB (69.6MB), run=2007-2007msec 00:28:53.797 15:14:12 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:53.797 15:14:12 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:53.798 15:14:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:53.798 15:14:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:53.798 15:14:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:53.798 15:14:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:53.798 15:14:12 -- common/autotest_common.sh@1320 -- # shift 00:28:53.798 15:14:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:53.798 15:14:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.798 15:14:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:53.798 15:14:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:53.798 15:14:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:53.798 15:14:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:53.798 15:14:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:53.798 15:14:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.798 15:14:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:53.798 15:14:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:53.798 15:14:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:53.798 15:14:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:53.798 15:14:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:53.798 15:14:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:53.798 15:14:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:54.056 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:54.056 fio-3.35 00:28:54.056 Starting 1 thread 00:28:54.315 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.846 00:28:56.846 test: (groupid=0, jobs=1): err= 0: pid=3435210: Tue Jun 11 15:14:15 2024 00:28:56.846 read: IOPS=8206, BW=128MiB/s (134MB/s)(257MiB/2005msec) 00:28:56.846 slat (usec): min=3, max=133, avg= 4.22, stdev= 1.81 00:28:56.846 clat (usec): min=2858, 
max=24275, avg=9453.75, stdev=2563.13 00:28:56.846 lat (usec): min=2862, max=24278, avg=9457.97, stdev=2563.40 00:28:56.846 clat percentiles (usec): 00:28:56.846 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 7111], 00:28:56.846 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028], 00:28:56.846 | 70.00th=[10683], 80.00th=[11731], 90.00th=[12911], 95.00th=[13698], 00:28:56.846 | 99.00th=[15795], 99.50th=[17433], 99.90th=[19268], 99.95th=[19792], 00:28:56.846 | 99.99th=[21365] 00:28:56.846 bw ( KiB/s): min=52096, max=77504, per=50.09%, avg=65768.00, stdev=12749.75, samples=4 00:28:56.846 iops : min= 3256, max= 4844, avg=4110.50, stdev=796.86, samples=4 00:28:56.846 write: IOPS=4754, BW=74.3MiB/s (77.9MB/s)(135MiB/1815msec); 0 zone resets 00:28:56.846 slat (usec): min=45, max=375, avg=46.95, stdev= 6.32 00:28:56.846 clat (usec): min=2789, max=19399, avg=10751.98, stdev=1986.20 00:28:56.846 lat (usec): min=2839, max=19449, avg=10798.93, stdev=1986.57 00:28:56.846 clat percentiles (usec): 00:28:56.846 | 1.00th=[ 6849], 5.00th=[ 7898], 10.00th=[ 8356], 20.00th=[ 8979], 00:28:56.846 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:28:56.846 | 70.00th=[11600], 80.00th=[12256], 90.00th=[13435], 95.00th=[14484], 00:28:56.846 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16712], 99.95th=[16909], 00:28:56.846 | 99.99th=[19530] 00:28:56.846 bw ( KiB/s): min=54656, max=80288, per=90.22%, avg=68640.00, stdev=12936.53, samples=4 00:28:56.846 iops : min= 3416, max= 5018, avg=4290.00, stdev=808.53, samples=4 00:28:56.846 lat (msec) : 4=0.27%, 10=51.60%, 20=48.11%, 50=0.02% 00:28:56.847 cpu : usr=88.72%, sys=9.93%, ctx=20, majf=0, minf=2 00:28:56.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:28:56.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:56.847 issued rwts: total=16454,8630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:56.847 00:28:56.847 Run status group 0 (all jobs): 00:28:56.847 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (270MB), run=2005-2005msec 00:28:56.847 WRITE: bw=74.3MiB/s (77.9MB/s), 74.3MiB/s-74.3MiB/s (77.9MB/s-77.9MB/s), io=135MiB (141MB), run=1815-1815msec 00:28:56.847 15:14:15 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.847 15:14:15 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:56.847 15:14:15 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:56.847 15:14:15 -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:56.847 15:14:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:56.847 15:14:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:56.847 15:14:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:56.847 15:14:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:56.847 15:14:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:57.105 15:14:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:57.105 15:14:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:28:57.105 15:14:15 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller 
-b Nvme0 -t PCIe -a 0000:86:00.0 -i 10.0.0.2 00:29:00.390 Nvme0n1 00:29:00.390 15:14:18 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:02.921 15:14:21 -- host/fio.sh@53 -- # ls_guid=e1e506fc-970e-4e31-8d26-9221fb26886d 00:29:02.921 15:14:21 -- host/fio.sh@54 -- # get_lvs_free_mb e1e506fc-970e-4e31-8d26-9221fb26886d 00:29:02.921 15:14:21 -- common/autotest_common.sh@1343 -- # local lvs_uuid=e1e506fc-970e-4e31-8d26-9221fb26886d 00:29:02.921 15:14:21 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:02.921 15:14:21 -- common/autotest_common.sh@1345 -- # local fc 00:29:02.921 15:14:21 -- common/autotest_common.sh@1346 -- # local cs 00:29:02.921 15:14:21 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:03.179 15:14:21 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:03.179 { 00:29:03.179 "uuid": "e1e506fc-970e-4e31-8d26-9221fb26886d", 00:29:03.179 "name": "lvs_0", 00:29:03.179 "base_bdev": "Nvme0n1", 00:29:03.179 "total_data_clusters": 930, 00:29:03.179 "free_clusters": 930, 00:29:03.179 "block_size": 512, 00:29:03.179 "cluster_size": 1073741824 00:29:03.179 } 00:29:03.179 ]' 00:29:03.179 15:14:21 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="e1e506fc-970e-4e31-8d26-9221fb26886d") .free_clusters' 00:29:03.437 15:14:22 -- common/autotest_common.sh@1348 -- # fc=930 00:29:03.437 15:14:22 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="e1e506fc-970e-4e31-8d26-9221fb26886d") .cluster_size' 00:29:03.437 15:14:22 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:29:03.437 15:14:22 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:29:03.437 15:14:22 -- common/autotest_common.sh@1353 -- # echo 952320 00:29:03.437 952320 00:29:03.437 15:14:22 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:03.695 fffd29f5-8349-4b25-a991-62dc8a0843f7 00:29:03.695 15:14:22 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:03.953 15:14:22 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:04.211 15:14:22 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:04.470 15:14:23 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:04.470 15:14:23 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:04.470 15:14:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:04.470 15:14:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:04.470 15:14:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:04.470 15:14:23 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:04.470 15:14:23 -- common/autotest_common.sh@1320 -- # shift 00:29:04.470 15:14:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:04.470 15:14:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:04.470 15:14:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:04.470 15:14:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:04.470 15:14:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:04.470 15:14:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:04.470 15:14:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:04.470 15:14:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:04.470 15:14:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:04.470 15:14:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:04.470 15:14:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:04.470 15:14:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:04.470 15:14:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:04.470 15:14:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:04.470 15:14:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:05.036 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:05.036 fio-3.35 00:29:05.036 Starting 1 thread 00:29:05.036 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.568 00:29:07.568 test: (groupid=0, jobs=1): err= 0: pid=3437244: Tue Jun 11 15:14:25 2024 00:29:07.568 read: IOPS=5697, BW=22.3MiB/s (23.3MB/s)(44.7MiB/2008msec) 00:29:07.568 slat (usec): min=2, max=126, avg= 2.50, stdev= 1.64 00:29:07.568 clat (usec): min=1147, max=171672, avg=12440.03, stdev=11930.79 00:29:07.568 lat (usec): min=1150, max=171694, avg=12442.53, stdev=11931.04 00:29:07.568 clat percentiles (msec): 00:29:07.568 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:29:07.568 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:29:07.568 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 14], 00:29:07.568 | 99.00th=[ 14], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:29:07.568 | 99.99th=[ 171] 00:29:07.568 bw ( KiB/s): min=16120, max=25312, per=99.93%, avg=22772.00, stdev=4441.54, samples=4 00:29:07.568 iops : min= 4030, max= 6328, avg=5693.00, stdev=1110.39, samples=4 00:29:07.568 write: IOPS=5680, BW=22.2MiB/s (23.3MB/s)(44.6MiB/2008msec); 0 zone resets 00:29:07.568 slat (usec): min=2, max=119, avg= 2.63, stdev= 1.19 00:29:07.568 clat (usec): min=415, max=170088, avg=9935.02, stdev=11220.48 00:29:07.568 lat (usec): min=417, max=170094, avg=9937.64, stdev=11220.77 00:29:07.568 clat percentiles (msec): 00:29:07.568 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:29:07.568 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:29:07.568 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:29:07.568 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 171], 00:29:07.568 | 99.99th=[ 171] 00:29:07.568 bw ( KiB/s): min=17192, max=24704, 
per=99.76%, avg=22666.00, stdev=3652.82, samples=4 00:29:07.568 iops : min= 4298, max= 6176, avg=5666.50, stdev=913.21, samples=4 00:29:07.568 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:07.568 lat (msec) : 2=0.04%, 4=0.09%, 10=45.22%, 20=54.08%, 250=0.56% 00:29:07.568 cpu : usr=68.51%, sys=26.66%, ctx=68, majf=0, minf=5 00:29:07.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:07.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:07.568 issued rwts: total=11440,11406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:07.568 00:29:07.568 Run status group 0 (all jobs): 00:29:07.568 READ: bw=22.3MiB/s (23.3MB/s), 22.3MiB/s-22.3MiB/s (23.3MB/s-23.3MB/s), io=44.7MiB (46.9MB), run=2008-2008msec 00:29:07.568 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.6MiB (46.7MB), run=2008-2008msec 00:29:07.568 15:14:25 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:07.568 15:14:26 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:08.943 15:14:27 -- host/fio.sh@64 -- # ls_nested_guid=90e4f088-4d77-4eae-9107-f947c2964b31 00:29:08.943 15:14:27 -- host/fio.sh@65 -- # get_lvs_free_mb 90e4f088-4d77-4eae-9107-f947c2964b31 00:29:08.943 15:14:27 -- common/autotest_common.sh@1343 -- # local lvs_uuid=90e4f088-4d77-4eae-9107-f947c2964b31 00:29:08.943 15:14:27 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:08.943 15:14:27 -- common/autotest_common.sh@1345 -- # local fc 00:29:08.943 15:14:27 -- common/autotest_common.sh@1346 -- # local cs 00:29:08.943 15:14:27 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:08.943 15:14:27 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:08.943 { 00:29:08.943 "uuid": "e1e506fc-970e-4e31-8d26-9221fb26886d", 00:29:08.943 "name": "lvs_0", 00:29:08.943 "base_bdev": "Nvme0n1", 00:29:08.943 "total_data_clusters": 930, 00:29:08.943 "free_clusters": 0, 00:29:08.943 "block_size": 512, 00:29:08.943 "cluster_size": 1073741824 00:29:08.943 }, 00:29:08.943 { 00:29:08.943 "uuid": "90e4f088-4d77-4eae-9107-f947c2964b31", 00:29:08.943 "name": "lvs_n_0", 00:29:08.943 "base_bdev": "fffd29f5-8349-4b25-a991-62dc8a0843f7", 00:29:08.943 "total_data_clusters": 237847, 00:29:08.943 "free_clusters": 237847, 00:29:08.943 "block_size": 512, 00:29:08.943 "cluster_size": 4194304 00:29:08.943 } 00:29:08.943 ]' 00:29:08.943 15:14:27 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="90e4f088-4d77-4eae-9107-f947c2964b31") .free_clusters' 00:29:08.943 15:14:27 -- common/autotest_common.sh@1348 -- # fc=237847 00:29:08.943 15:14:27 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="90e4f088-4d77-4eae-9107-f947c2964b31") .cluster_size' 00:29:08.943 15:14:27 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:08.943 15:14:27 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:29:08.943 15:14:27 -- common/autotest_common.sh@1353 -- # echo 951388 00:29:08.943 951388 00:29:08.943 15:14:27 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:09.878 
af570b9c-7b80-4421-b84b-ada82be2f80f 00:29:09.878 15:14:28 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:10.136 15:14:28 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:10.394 15:14:29 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:10.652 15:14:29 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:10.652 15:14:29 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:10.652 15:14:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:10.652 15:14:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:10.652 15:14:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:10.652 15:14:29 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:10.652 15:14:29 -- common/autotest_common.sh@1320 -- # shift 00:29:10.652 15:14:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:10.652 15:14:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.652 15:14:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:10.652 15:14:29 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:10.652 15:14:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:10.652 15:14:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:10.652 15:14:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:10.652 15:14:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.652 15:14:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:10.652 15:14:29 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:10.652 15:14:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:10.652 15:14:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:10.652 15:14:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:10.652 15:14:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:10.652 15:14:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:10.910 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:10.910 fio-3.35 00:29:10.910 Starting 1 thread 00:29:10.910 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.445 00:29:13.445 test: (groupid=0, jobs=1): err= 0: pid=3438449: Tue Jun 11 15:14:32 2024 00:29:13.445 read: IOPS=5402, BW=21.1MiB/s (22.1MB/s)(42.4MiB/2010msec) 00:29:13.445 slat (usec): 
min=2, max=129, avg= 2.53, stdev= 1.66 00:29:13.445 clat (usec): min=4670, max=20138, avg=13190.33, stdev=1073.50 00:29:13.445 lat (usec): min=4675, max=20141, avg=13192.86, stdev=1073.40 00:29:13.445 clat percentiles (usec): 00:29:13.445 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11863], 20.00th=[12387], 00:29:13.445 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:29:13.445 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:29:13.445 | 99.00th=[15533], 99.50th=[15795], 99.90th=[18220], 99.95th=[18482], 00:29:13.445 | 99.99th=[20055] 00:29:13.445 bw ( KiB/s): min=20664, max=22072, per=99.90%, avg=21590.00, stdev=654.51, samples=4 00:29:13.445 iops : min= 5166, max= 5518, avg=5397.50, stdev=163.63, samples=4 00:29:13.445 write: IOPS=5385, BW=21.0MiB/s (22.1MB/s)(42.3MiB/2010msec); 0 zone resets 00:29:13.445 slat (usec): min=2, max=111, avg= 2.64, stdev= 1.13 00:29:13.445 clat (usec): min=2248, max=16833, avg=10437.08, stdev=964.07 00:29:13.445 lat (usec): min=2256, max=16836, avg=10439.72, stdev=964.02 00:29:13.445 clat percentiles (usec): 00:29:13.445 | 1.00th=[ 8160], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:29:13.445 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:29:13.445 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:29:13.445 | 99.00th=[12649], 99.50th=[13042], 99.90th=[16319], 99.95th=[16581], 00:29:13.445 | 99.99th=[16909] 00:29:13.445 bw ( KiB/s): min=21248, max=21760, per=99.92%, avg=21526.00, stdev=253.10, samples=4 00:29:13.445 iops : min= 5312, max= 5440, avg=5381.50, stdev=63.27, samples=4 00:29:13.445 lat (msec) : 4=0.05%, 10=15.07%, 20=84.87%, 50=0.01% 00:29:13.445 cpu : usr=67.00%, sys=28.47%, ctx=48, majf=0, minf=5 00:29:13.445 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:13.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:13.445 issued rwts: total=10860,10825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:13.445 00:29:13.445 Run status group 0 (all jobs): 00:29:13.445 READ: bw=21.1MiB/s (22.1MB/s), 21.1MiB/s-21.1MiB/s (22.1MB/s-22.1MB/s), io=42.4MiB (44.5MB), run=2010-2010msec 00:29:13.445 WRITE: bw=21.0MiB/s (22.1MB/s), 21.0MiB/s-21.0MiB/s (22.1MB/s-22.1MB/s), io=42.3MiB (44.3MB), run=2010-2010msec 00:29:13.445 15:14:32 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:13.704 15:14:32 -- host/fio.sh@74 -- # sync 00:29:13.704 15:14:32 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:17.892 15:14:36 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:17.892 15:14:36 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:21.227 15:14:39 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:21.227 15:14:39 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:23.175 15:14:41 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:23.175 15:14:41 -- host/fio.sh@85 -- # rm -f 
./local-test-0-verify.state 00:29:23.175 15:14:41 -- host/fio.sh@86 -- # nvmftestfini 00:29:23.175 15:14:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:23.175 15:14:41 -- nvmf/common.sh@116 -- # sync 00:29:23.175 15:14:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:23.175 15:14:41 -- nvmf/common.sh@119 -- # set +e 00:29:23.175 15:14:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:23.175 15:14:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:23.175 rmmod nvme_tcp 00:29:23.175 rmmod nvme_fabrics 00:29:23.175 rmmod nvme_keyring 00:29:23.175 15:14:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:23.175 15:14:41 -- nvmf/common.sh@123 -- # set -e 00:29:23.175 15:14:41 -- nvmf/common.sh@124 -- # return 0 00:29:23.175 15:14:41 -- nvmf/common.sh@477 -- # '[' -n 3433858 ']' 00:29:23.175 15:14:41 -- nvmf/common.sh@478 -- # killprocess 3433858 00:29:23.175 15:14:41 -- common/autotest_common.sh@926 -- # '[' -z 3433858 ']' 00:29:23.175 15:14:41 -- common/autotest_common.sh@930 -- # kill -0 3433858 00:29:23.175 15:14:41 -- common/autotest_common.sh@931 -- # uname 00:29:23.175 15:14:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:23.175 15:14:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3433858 00:29:23.175 15:14:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:23.175 15:14:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:23.175 15:14:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3433858' 00:29:23.175 killing process with pid 3433858 00:29:23.175 15:14:41 -- common/autotest_common.sh@945 -- # kill 3433858 00:29:23.176 15:14:41 -- common/autotest_common.sh@950 -- # wait 3433858 00:29:23.176 15:14:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:23.176 15:14:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:23.176 15:14:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:23.176 15:14:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.176 15:14:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:23.176 15:14:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.176 15:14:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.176 15:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.709 15:14:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:25.709 00:29:25.709 real 0m43.538s 00:29:25.709 user 3m13.329s 00:29:25.709 sys 0m9.705s 00:29:25.709 15:14:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.709 15:14:44 -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 ************************************ 00:29:25.709 END TEST nvmf_fio_host 00:29:25.709 ************************************ 00:29:25.709 15:14:44 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:25.709 15:14:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:25.709 15:14:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.709 15:14:44 -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 ************************************ 00:29:25.709 START TEST nvmf_failover 00:29:25.709 ************************************ 00:29:25.709 15:14:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:25.709 * Looking for test storage... 
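(For reference: every fio job traced in the nvmf_fio_host run above is launched the same way — the SPDK NVMe fio engine is LD_PRELOADed into fio and the TCP transport tuple is passed as the fio "filename". A minimal sketch of that invocation follows, using the plugin path and filename string exactly as they appear in this log; the inline job options are only stand-ins for the example_config.fio / mock_sgl_config.fio contents, which are not shown here.)

# run fio with the SPDK NVMe external ioengine against the TCP target at 10.0.0.2:4420
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio \
    --name=test --ioengine=spdk --thread=1 --iodepth=128 --bs=4096 --rw=randrw \
    --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'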
00:29:25.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.709 15:14:44 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.709 15:14:44 -- nvmf/common.sh@7 -- # uname -s 00:29:25.709 15:14:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.709 15:14:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.709 15:14:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.710 15:14:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.710 15:14:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.710 15:14:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.710 15:14:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.710 15:14:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.710 15:14:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.710 15:14:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.710 15:14:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:25.710 15:14:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:25.710 15:14:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.710 15:14:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.710 15:14:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.710 15:14:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.710 15:14:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.710 15:14:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.710 15:14:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.710 15:14:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.710 15:14:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.710 15:14:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.710 15:14:44 -- paths/export.sh@5 -- # export PATH 00:29:25.710 15:14:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.710 15:14:44 -- nvmf/common.sh@46 -- # : 0 00:29:25.710 15:14:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:25.710 15:14:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:25.710 15:14:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:25.710 15:14:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.710 15:14:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.710 15:14:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:25.710 15:14:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:25.710 15:14:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:25.710 15:14:44 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:25.710 15:14:44 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:25.710 15:14:44 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:25.710 15:14:44 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:25.710 15:14:44 -- host/failover.sh@18 -- # nvmftestinit 00:29:25.710 15:14:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:25.710 15:14:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.710 15:14:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:25.710 15:14:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:25.710 15:14:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:25.710 15:14:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.710 15:14:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.710 15:14:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.710 15:14:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:25.710 15:14:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:25.710 15:14:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:25.710 15:14:44 -- common/autotest_common.sh@10 -- # set +x 00:29:32.282 15:14:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:32.282 15:14:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:32.282 15:14:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:32.282 15:14:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:32.282 15:14:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:32.282 15:14:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:32.282 15:14:50 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:29:32.282 15:14:50 -- nvmf/common.sh@294 -- # net_devs=() 00:29:32.282 15:14:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:32.282 15:14:50 -- nvmf/common.sh@295 -- # e810=() 00:29:32.282 15:14:50 -- nvmf/common.sh@295 -- # local -ga e810 00:29:32.282 15:14:50 -- nvmf/common.sh@296 -- # x722=() 00:29:32.282 15:14:50 -- nvmf/common.sh@296 -- # local -ga x722 00:29:32.282 15:14:50 -- nvmf/common.sh@297 -- # mlx=() 00:29:32.282 15:14:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:32.282 15:14:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.282 15:14:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:32.282 15:14:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:32.282 15:14:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:32.282 15:14:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:32.282 15:14:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:32.282 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:32.282 15:14:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:32.282 15:14:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:32.282 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:32.282 15:14:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:32.282 15:14:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:32.282 15:14:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.282 15:14:50 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:29:32.282 15:14:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.282 15:14:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:32.282 Found net devices under 0000:af:00.0: cvl_0_0 00:29:32.282 15:14:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.282 15:14:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:32.282 15:14:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.282 15:14:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:32.282 15:14:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.282 15:14:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:32.282 Found net devices under 0000:af:00.1: cvl_0_1 00:29:32.282 15:14:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.282 15:14:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:32.282 15:14:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:32.282 15:14:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:32.282 15:14:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:32.282 15:14:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.282 15:14:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.282 15:14:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.282 15:14:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:32.282 15:14:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.282 15:14:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.282 15:14:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:32.282 15:14:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.282 15:14:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.282 15:14:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:32.282 15:14:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:32.282 15:14:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.282 15:14:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.282 15:14:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.282 15:14:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.282 15:14:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:32.282 15:14:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.282 15:14:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.282 15:14:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.282 15:14:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:32.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:29:32.282 00:29:32.282 --- 10.0.0.2 ping statistics --- 00:29:32.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.282 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:32.282 15:14:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:29:32.282 00:29:32.282 --- 10.0.0.1 ping statistics --- 00:29:32.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.282 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:29:32.282 15:14:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.282 15:14:50 -- nvmf/common.sh@410 -- # return 0 00:29:32.282 15:14:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:32.282 15:14:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.282 15:14:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:32.283 15:14:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:32.283 15:14:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.283 15:14:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:32.283 15:14:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:32.283 15:14:50 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:32.283 15:14:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:32.283 15:14:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:32.283 15:14:50 -- common/autotest_common.sh@10 -- # set +x 00:29:32.283 15:14:50 -- nvmf/common.sh@469 -- # nvmfpid=3444444 00:29:32.283 15:14:50 -- nvmf/common.sh@470 -- # waitforlisten 3444444 00:29:32.283 15:14:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:32.283 15:14:50 -- common/autotest_common.sh@819 -- # '[' -z 3444444 ']' 00:29:32.283 15:14:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.283 15:14:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:32.283 15:14:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.283 15:14:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:32.283 15:14:50 -- common/autotest_common.sh@10 -- # set +x 00:29:32.283 [2024-06-11 15:14:50.732579] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:32.283 [2024-06-11 15:14:50.732637] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.283 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.283 [2024-06-11 15:14:50.824087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:32.283 [2024-06-11 15:14:50.914887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:32.283 [2024-06-11 15:14:50.915044] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.283 [2024-06-11 15:14:50.915057] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.283 [2024-06-11 15:14:50.915066] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
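(For reference: the nvmf_failover run being started here builds one subsystem with listeners on ports 4420/4421/4422, then removes and re-adds those listeners while bdevperf keeps I/O in flight, forcing the initiator to fail over between paths. A condensed sketch of that control-plane sequence, assembled from the rpc.py calls traced below; only the short $rpc variable is introduced here for brevity.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side: TCP transport, a 64 MB malloc bdev, one subsystem, three listeners
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# initiator side: bdevperf (started with -z -r /var/tmp/bdevperf.sock) attaches NVMe0
# to two of the paths, then the test removes the active listener to trigger failover
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420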
00:29:32.283 [2024-06-11 15:14:50.915109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.283 [2024-06-11 15:14:50.915222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.283 [2024-06-11 15:14:50.915223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.850 15:14:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:32.850 15:14:51 -- common/autotest_common.sh@852 -- # return 0 00:29:32.850 15:14:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:32.850 15:14:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:32.850 15:14:51 -- common/autotest_common.sh@10 -- # set +x 00:29:33.109 15:14:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.109 15:14:51 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:33.109 [2024-06-11 15:14:51.922648] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.367 15:14:51 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:33.367 Malloc0 00:29:33.625 15:14:52 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:33.883 15:14:52 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.141 15:14:52 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.141 [2024-06-11 15:14:52.943921] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.141 15:14:52 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:34.400 [2024-06-11 15:14:53.192763] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:34.400 15:14:53 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:34.659 [2024-06-11 15:14:53.433603] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:34.659 15:14:53 -- host/failover.sh@31 -- # bdevperf_pid=3444900 00:29:34.659 15:14:53 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:34.659 15:14:53 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:34.659 15:14:53 -- host/failover.sh@34 -- # waitforlisten 3444900 /var/tmp/bdevperf.sock 00:29:34.659 15:14:53 -- common/autotest_common.sh@819 -- # '[' -z 3444900 ']' 00:29:34.659 15:14:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:34.659 15:14:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:34.659 15:14:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:34.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:34.659 15:14:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:34.659 15:14:53 -- common/autotest_common.sh@10 -- # set +x 00:29:36.034 15:14:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:36.034 15:14:54 -- common/autotest_common.sh@852 -- # return 0 00:29:36.034 15:14:54 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:36.034 NVMe0n1 00:29:36.034 15:14:54 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:36.291 00:29:36.291 15:14:55 -- host/failover.sh@39 -- # run_test_pid=3445212 00:29:36.291 15:14:55 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:36.291 15:14:55 -- host/failover.sh@41 -- # sleep 1 00:29:37.667 15:14:56 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.667 [2024-06-11 15:14:56.325546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325650] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the 
state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set [... identical tcp.c:1574 messages for tqpair=0x21504f0 repeated at successive timestamps, trimmed ...] 00:29:37.667 [2024-06-11 15:14:56.325794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.667 [2024-06-11 15:14:56.325806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.668 [2024-06-11 15:14:56.325811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.668 [2024-06-11 15:14:56.325817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504f0 is same with the state(5) to be set 00:29:37.668 15:14:56 -- host/failover.sh@45 -- # sleep 3 00:29:40.952 15:14:59 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:40.952 00:29:41.211 15:14:59 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:41.211 [2024-06-11 15:15:00.036808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.036970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 
00:29:41.211 [2024-06-11 15:15:00.036984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set [... identical tcp.c:1574 messages for tqpair=0x2151c30 repeated at successive timestamps, trimmed ...] 00:29:41.211 [2024-06-11 15:15:00.037205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.037214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.211 [2024-06-11 15:15:00.037224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.212 [2024-06-11 15:15:00.037383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151c30 is same with the state(5) to be set 00:29:41.471 15:15:00 -- host/failover.sh@50 -- # sleep 3 00:29:44.755 15:15:03 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
00:29:41.471 15:15:00 -- host/failover.sh@50 -- # sleep 3
00:29:44.755 15:15:03 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:44.755 [2024-06-11 15:15:03.291911] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:44.755 15:15:03 -- host/failover.sh@55 -- # sleep 1
00:29:45.690 15:15:04 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:45.948 [2024-06-11 15:15:04.549740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2152440 is same with the state(5) to be set
[... the same tcp.c:1574 recv-state message for tqpair=0x2152440 repeats at microsecond intervals through 15:15:04.550242 ...]
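Stripped of the trace prefixes, the listener flip that failover.sh performs here reduces to two rpc.py calls. A stand-alone sketch: RPC and NQN are shorthand introduced only for readability, the script path, subsystem NQN, addresses and ports are the ones shown in the trace above.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
"$RPC" nvmf_subsystem_add_listener    "$NQN" -t tcp -a 10.0.0.2 -s 4420   # failover.sh@53: re-add the 4420 listener
sleep 1                                                                    # failover.sh@55
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422   # failover.sh@57: drop the 4422 listener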
00:29:45.949 15:15:04 -- host/failover.sh@59 -- # wait 3445212
00:29:52.522 0
00:29:52.522 15:15:10 -- host/failover.sh@61 -- # killprocess 3444900
00:29:52.522 15:15:10 -- common/autotest_common.sh@926 -- # '[' -z 3444900 ']'
00:29:52.522 15:15:10 -- common/autotest_common.sh@930 -- # kill -0 3444900
00:29:52.522 15:15:10 -- common/autotest_common.sh@931 -- # uname
00:29:52.522 15:15:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:52.522 15:15:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3444900
00:29:52.522 15:15:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:29:52.522 15:15:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:29:52.522 15:15:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3444900'
killing process with pid 3444900
15:15:10 -- common/autotest_common.sh@945 -- # kill 3444900
15:15:10 -- common/autotest_common.sh@950 -- # wait 3444900
15:15:10 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:52.522 [2024-06-11 15:14:53.505350] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:29:52.522 [2024-06-11 15:14:53.505417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3444900 ]
00:29:52.522 EAL: No free 2048 kB hugepages reported on node 1
00:29:52.522 [2024-06-11 15:14:53.597146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:52.522 [2024-06-11 15:14:53.683888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:52.522 Running I/O for 15 seconds...
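The autotest_common.sh trace above (@926 through @950) outlines the shutdown helper that reaps bdevperf (pid 3444900) before its try.txt log is dumped. A reconstruction pieced together from just those traced commands might look like the following; it is a sketch, not the actual library source, and the pid/process_name variable handling is inferred.

killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                                  # @926: require a pid argument
        kill -0 "$pid" || return 1                                 # @930: bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then                            # @931
                process_name=$(ps --no-headers -o comm= "$pid")    # @932: "reactor_0" in this run
        fi
        if [ "$process_name" = sudo ]; then                        # @936: the sudo branch is not taken here
                :                                                  # (what the real helper does for sudo is not visible in this trace)
        fi
        echo "killing process with pid $pid"                       # @944
        kill "$pid"                                                # @945
        wait "$pid"                                                # @950: reap it and propagate its exit status
}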
00:29:52.522 [2024-06-11 15:14:56.327163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327431] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.522 [2024-06-11 15:14:56.327807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.522 [2024-06-11 15:14:56.327819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.522 [2024-06-11 15:14:56.327829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.327841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.523 [2024-06-11 15:14:56.327851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.327862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.523 [2024-06-11 15:14:56.327872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.327884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.327894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.327906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.523 [2024-06-11 15:14:56.327915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.327928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.327938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.327950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.327959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.327972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.327982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.327994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17040 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.523 [2024-06-11 15:14:56.328328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:52.523 [2024-06-11 15:14:56.328353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.523 [2024-06-11 15:14:56.328375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.523 [2024-06-11 15:14:56.328403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.523 [2024-06-11 15:14:56.328449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.523 [2024-06-11 15:14:56.328470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328579] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.523 [2024-06-11 15:14:56.328691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.523 [2024-06-11 15:14:56.328712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.523 [2024-06-11 15:14:56.328724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.328733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.328755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.328777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.328798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.328820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.328841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.328864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.328888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.328909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.328931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.328953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.328974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.328986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.328996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:52.524 [2024-06-11 15:14:56.329260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.329270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.329313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.329356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.329384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.329405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.329451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329484] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.329494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.329559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.524 [2024-06-11 15:14:56.329580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.524 [2024-06-11 15:14:56.329602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.524 [2024-06-11 15:14:56.329614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:14:56.329623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:14:56.329646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329701] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:14:56.329711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:14:56.329849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18072 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:14:56.329935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.329956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:14:56.329978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.329990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:14:56.330001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.330013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:14:56.330022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.330041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.525 [2024-06-11 15:14:56.330052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.330085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:52.525 [2024-06-11 15:14:56.330094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:52.525 [2024-06-11 15:14:56.330105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18120 len:8 PRP1 0x0 PRP2 0x0 00:29:52.525 [2024-06-11 15:14:56.330115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:14:56.330163] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22ed550 was disconnected and freed. reset controller. 
00:29:52.525 [2024-06-11 15:14:56.330181] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-06-11 15:14:56.330208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-06-11 15:14:56.330221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-11 15:14:56.330232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-06-11 15:14:56.330242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-11 15:14:56.330253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-06-11 15:14:56.330262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-11 15:14:56.330273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-06-11 15:14:56.330282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-11 15:14:56.330291] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-06-11 15:14:56.333113] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-06-11 15:14:56.333145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f7550 (9): Bad file descriptor
[2024-06-11 15:14:56.403824] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
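With try.txt dumped by failover.sh@63 above, the failover and reset activity that bdevperf recorded (this sequence and the ones that follow) can be double-checked with a couple of greps. A sketch, using the try.txt path from the trace and message strings copied from this output:

TRY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
grep -c 'bdev_nvme_failover_trid'         "$TRY"   # failover attempts logged by bdev_nvme
grep -c 'Resetting controller successful' "$TRY"   # controller resets that completed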
00:29:52.525 [2024-06-11 15:15:00.035954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.525 [2024-06-11 15:15:00.036002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.036016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.525 [2024-06-11 15:15:00.036033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.036051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.525 [2024-06-11 15:15:00.036062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.036072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.525 [2024-06-11 15:15:00.036082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.036093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f7550 is same with the state(5) to be set 00:29:52.525 [2024-06-11 15:15:00.037664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:15:00.037687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.037706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:15:00.037717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.037730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:15:00.037741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.037754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:15:00.037764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.037777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:15:00.037786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.037799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:15:00.037810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.037823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:15:00.037833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.525 [2024-06-11 15:15:00.037847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.525 [2024-06-11 15:15:00.037857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.037869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.037879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.037892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.037902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.037919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.037930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.037942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.037953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.037965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.037975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.037988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.037998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038280] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.526 [2024-06-11 15:15:00.038576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.526 [2024-06-11 15:15:00.038589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.038665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.038754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.038801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:52.527 [2024-06-11 15:15:00.038967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.038989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.038998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 
15:15:00.039195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-06-11 15:15:00.039360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.527 [2024-06-11 15:15:00.039470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.527 [2024-06-11 15:15:00.039483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.039604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.039628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039640] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.039759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.039803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.039824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.039964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.039986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.039998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.040060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.040264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 [2024-06-11 15:15:00.040285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.040309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.528 
[2024-06-11 15:15:00.040331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.040352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-06-11 15:15:00.040375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-06-11 15:15:00.040387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.529 [2024-06-11 15:15:00.040397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:00.040409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:00.040419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:00.040431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:00.040441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:00.040453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:00.040463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:00.040478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:00.040487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:00.040499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:00.040510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:00.040522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:00.040532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:00.040544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:00.040553] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:00.040565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2303aa0 is same with the state(5) to be set 00:29:52.529 [2024-06-11 15:15:00.040577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:52.529 [2024-06-11 15:15:00.040586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:52.529 [2024-06-11 15:15:00.040595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115560 len:8 PRP1 0x0 PRP2 0x0 00:29:52.529 [2024-06-11 15:15:00.040605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:00.040653] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2303aa0 was disconnected and freed. reset controller. 00:29:52.529 [2024-06-11 15:15:00.040667] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:52.529 [2024-06-11 15:15:00.040677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.529 [2024-06-11 15:15:00.043518] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.529 [2024-06-11 15:15:00.043552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f7550 (9): Bad file descriptor 00:29:52.529 [2024-06-11 15:15:00.078802] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:52.529 [2024-06-11 15:15:04.550435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 
15:15:04.550600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.550982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.550992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.551004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.551015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.551034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.551045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.551057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.551067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.551079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.551089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.529 [2024-06-11 15:15:04.551101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.529 [2024-06-11 15:15:04.551111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 
[2024-06-11 15:15:04.551513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.530 [2024-06-11 15:15:04.551935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.530 [2024-06-11 15:15:04.551968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.530 [2024-06-11 15:15:04.551978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.551990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.531 [2024-06-11 15:15:04.552001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.531 [2024-06-11 15:15:04.552470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.531 [2024-06-11 15:15:04.552537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 
[2024-06-11 15:15:04.552571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.531 [2024-06-11 15:15:04.552581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.531 [2024-06-11 15:15:04.552625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.531 [2024-06-11 15:15:04.552691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.531 [2024-06-11 15:15:04.552718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.531 [2024-06-11 15:15:04.552808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.531 [2024-06-11 15:15:04.552831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.531 [2024-06-11 15:15:04.552887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.531 [2024-06-11 15:15:04.552897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.552909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.552919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.552931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.532 [2024-06-11 15:15:04.552941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.552954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.552964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.552975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.552985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.532 [2024-06-11 15:15:04.553162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.532 [2024-06-11 15:15:04.553235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.532 [2024-06-11 15:15:04.553258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16448 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.532 [2024-06-11 15:15:04.553329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.532 [2024-06-11 15:15:04.553351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.532 [2024-06-11 15:15:04.553488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:52.532 [2024-06-11 15:15:04.553512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3320 is same with the state(5) to be set 00:29:52.532 [2024-06-11 15:15:04.553536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:52.532 [2024-06-11 15:15:04.553545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:52.532 [2024-06-11 15:15:04.553554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15976 len:8 PRP1 0x0 PRP2 0x0 00:29:52.532 [2024-06-11 15:15:04.553564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553614] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22f3320 was disconnected and freed. reset controller. 00:29:52.532 [2024-06-11 15:15:04.553627] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:52.532 [2024-06-11 15:15:04.553652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.532 [2024-06-11 15:15:04.553666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.532 [2024-06-11 15:15:04.553686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.532 [2024-06-11 15:15:04.553706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.532 [2024-06-11 15:15:04.553726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.532 [2024-06-11 15:15:04.553736] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.532 [2024-06-11 15:15:04.556538] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.532 [2024-06-11 15:15:04.556571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f7550 (9): Bad file descriptor 00:29:52.532 [2024-06-11 15:15:04.673002] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
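Editor's note: the flood of *NOTICE* lines above is expected at this point in the run. Each one is nvme_qpair.c printing a queued READ/WRITE (sqid/cid/nsid/lba/len) together with its completion, and "(00/08)" is NVMe status code type 0x00 (generic) with status code 0x08, i.e. the command was aborted because its submission queue was deleted during the controller reset. Those aborts surface as the non-zero Fail/s column in the summary that follows, and I/O resumes on the new path once the reset completes (here the failover from 10.0.0.2:4422 back to 10.0.0.2:4420). A rough way to quantify this from the captured per-test log; the try.txt path is the one this run uses further down and is an assumption for any other run:
LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
grep -c 'ABORTED - SQ DELETION' "$LOG"            # commands flushed when the SQ was torn down
grep -c 'Start failover from' "$LOG"              # path switches between 10.0.0.2:4420/4421/4422
grep -c 'Resetting controller successful' "$LOG"  # successful reconnects (failover.sh expects 3)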
00:29:52.532 
00:29:52.532 Latency(us) 
00:29:52.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:52.532 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:29:52.532 Verification LBA range: start 0x0 length 0x4000 
00:29:52.532 NVMe0n1 : 15.01 11718.28 45.77 710.35 0.00 10278.51 744.73 14596.65 
00:29:52.532 =================================================================================================================== 
00:29:52.532 Total : 11718.28 45.77 710.35 0.00 10278.51 744.73 14596.65 
00:29:52.532 Received shutdown signal, test time was about 15.000000 seconds 
00:29:52.532 
00:29:52.532 Latency(us) 
00:29:52.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:52.532 =================================================================================================================== 
00:29:52.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:29:52.532 15:15:10 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:29:52.532 15:15:10 -- host/failover.sh@65 -- # count=3 
00:29:52.532 15:15:10 -- host/failover.sh@67 -- # (( count != 3 )) 
00:29:52.533 15:15:10 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:29:52.533 15:15:10 -- host/failover.sh@73 -- # bdevperf_pid=3448548 
00:29:52.533 15:15:10 -- host/failover.sh@75 -- # waitforlisten 3448548 /var/tmp/bdevperf.sock 
00:29:52.533 15:15:10 -- common/autotest_common.sh@819 -- # '[' -z 3448548 ']' 
00:29:52.533 15:15:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:29:52.533 15:15:10 -- common/autotest_common.sh@824 -- # local max_retries=100 
00:29:52.533 15:15:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
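Editor's note: that wait is waitforlisten polling bdevperf's RPC socket. bdevperf was launched with -z (stay idle until driven over RPC) and -r /var/tmp/bdevperf.sock, so nothing can be configured until the socket answers. A minimal sketch of such a wait, assuming the same socket path and using rpc_get_methods as a no-op probe; the real helper in autotest_common.sh is more involved:
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
for i in $(seq 1 100); do
    # done once the socket exists and a trivial RPC is answered within a second
    [ -S "$SOCK" ] && "$RPC" -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done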
00:29:52.533 15:15:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:52.533 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:29:52.792 15:15:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:52.792 15:15:11 -- common/autotest_common.sh@852 -- # return 0 00:29:52.792 15:15:11 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:53.050 [2024-06-11 15:15:11.735023] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:53.050 15:15:11 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:53.309 [2024-06-11 15:15:11.979826] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:53.309 15:15:12 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:53.568 NVMe0n1 00:29:53.568 15:15:12 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:54.135 00:29:54.135 15:15:12 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:54.393 00:29:54.393 15:15:13 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:54.393 15:15:13 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:54.653 15:15:13 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:54.912 15:15:13 -- host/failover.sh@87 -- # sleep 3 00:29:58.200 15:15:16 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:58.200 15:15:16 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:58.200 15:15:16 -- host/failover.sh@90 -- # run_test_pid=3449665 00:29:58.200 15:15:16 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:58.200 15:15:16 -- host/failover.sh@92 -- # wait 3449665 00:29:59.579 0 00:29:59.579 15:15:18 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:59.579 [2024-06-11 15:15:10.576458] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
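Editor's note: stepping back from the log dump for a moment, the failover.sh commands traced above are the whole exercise: the target exposes nqn.2016-06.io.spdk:cnode1 on two extra portals (4421, 4422), bdevperf attaches NVMe0 to all three portals so the controller has alternate trids, and the currently connected portal is then detached (with a sleep 3) to force a failover while I/O runs. A condensed sketch of the same flow, with the loop and ordering simplified by the editor and on the assumption, supported by this run, that repeated bdev_nvme_attach_controller calls with the same -b name register additional failover paths:
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
# target side: two extra listeners for the existing subsystem
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# initiator (bdevperf) side: one NVMe0 controller attached to all three portals
for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# detaching the portal that is currently connected forces bdev_nvme to fail over
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
The bdevperf log that failover.sh cats from try.txt resumes below.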
00:29:59.579 [2024-06-11 15:15:10.576526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3448548 ] 00:29:59.579 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.579 [2024-06-11 15:15:10.667531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.579 [2024-06-11 15:15:10.747825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.579 [2024-06-11 15:15:13.676476] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:59.579 [2024-06-11 15:15:13.676530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.579 [2024-06-11 15:15:13.676545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.579 [2024-06-11 15:15:13.676557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.579 [2024-06-11 15:15:13.676568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.579 [2024-06-11 15:15:13.676578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.579 [2024-06-11 15:15:13.676589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.579 [2024-06-11 15:15:13.676600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.579 [2024-06-11 15:15:13.676610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.579 [2024-06-11 15:15:13.676620] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.579 [2024-06-11 15:15:13.676648] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.579 [2024-06-11 15:15:13.676666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc88550 (9): Bad file descriptor 00:29:59.579 [2024-06-11 15:15:13.686681] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:59.579 Running I/O for 1 seconds... 
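Editor's note: the dump above is the bdevperf side of that run, showing the EAL/reactor startup, the forced failover from 10.0.0.2:4420 to 10.0.0.2:4421, the controller reset, and then one second of verify I/O; its summary table follows. The timed run itself is driven over the same RPC socket, roughly as below; the bdevperf.py path is taken from the trace above, and the `$!` capture and interleaving are the editor's paraphrase of the script:
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# kick off the timed run asynchronously, then wait for its exit status
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
# ... the detach calls that force failovers are interleaved around runs like this ...
wait $run_test_pid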
00:29:59.579 00:29:59.579 Latency(us) 00:29:59.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.579 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:59.579 Verification LBA range: start 0x0 length 0x4000 00:29:59.579 NVMe0n1 : 1.01 11507.64 44.95 0.00 0.00 11068.78 1407.53 14060.45 00:29:59.579 =================================================================================================================== 00:29:59.579 Total : 11507.64 44.95 0.00 0.00 11068.78 1407.53 14060.45 00:29:59.579 15:15:18 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:59.579 15:15:18 -- host/failover.sh@95 -- # grep -q NVMe0 00:29:59.579 15:15:18 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:59.838 15:15:18 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:59.838 15:15:18 -- host/failover.sh@99 -- # grep -q NVMe0 00:30:00.097 15:15:18 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:00.356 15:15:19 -- host/failover.sh@101 -- # sleep 3 00:30:03.687 15:15:22 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:03.687 15:15:22 -- host/failover.sh@103 -- # grep -q NVMe0 00:30:03.687 15:15:22 -- host/failover.sh@108 -- # killprocess 3448548 00:30:03.687 15:15:22 -- common/autotest_common.sh@926 -- # '[' -z 3448548 ']' 00:30:03.687 15:15:22 -- common/autotest_common.sh@930 -- # kill -0 3448548 00:30:03.687 15:15:22 -- common/autotest_common.sh@931 -- # uname 00:30:03.687 15:15:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:03.687 15:15:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3448548 00:30:03.687 15:15:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:03.687 15:15:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:03.687 15:15:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3448548' 00:30:03.687 killing process with pid 3448548 00:30:03.687 15:15:22 -- common/autotest_common.sh@945 -- # kill 3448548 00:30:03.687 15:15:22 -- common/autotest_common.sh@950 -- # wait 3448548 00:30:03.956 15:15:22 -- host/failover.sh@110 -- # sync 00:30:03.956 15:15:22 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:04.215 15:15:22 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:04.215 15:15:22 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:04.215 15:15:22 -- host/failover.sh@116 -- # nvmftestfini 00:30:04.215 15:15:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:04.215 15:15:22 -- nvmf/common.sh@116 -- # sync 00:30:04.215 15:15:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:04.215 15:15:22 -- nvmf/common.sh@119 -- # set +e 00:30:04.215 15:15:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:04.215 15:15:22 -- nvmf/common.sh@121 
-- # modprobe -v -r nvme-tcp 00:30:04.215 rmmod nvme_tcp 00:30:04.215 rmmod nvme_fabrics 00:30:04.215 rmmod nvme_keyring 00:30:04.215 15:15:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:04.215 15:15:22 -- nvmf/common.sh@123 -- # set -e 00:30:04.215 15:15:22 -- nvmf/common.sh@124 -- # return 0 00:30:04.215 15:15:22 -- nvmf/common.sh@477 -- # '[' -n 3444444 ']' 00:30:04.215 15:15:22 -- nvmf/common.sh@478 -- # killprocess 3444444 00:30:04.215 15:15:22 -- common/autotest_common.sh@926 -- # '[' -z 3444444 ']' 00:30:04.215 15:15:22 -- common/autotest_common.sh@930 -- # kill -0 3444444 00:30:04.215 15:15:22 -- common/autotest_common.sh@931 -- # uname 00:30:04.215 15:15:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:04.215 15:15:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3444444 00:30:04.215 15:15:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:04.215 15:15:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:04.215 15:15:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3444444' 00:30:04.215 killing process with pid 3444444 00:30:04.215 15:15:22 -- common/autotest_common.sh@945 -- # kill 3444444 00:30:04.215 15:15:22 -- common/autotest_common.sh@950 -- # wait 3444444 00:30:04.473 15:15:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:04.473 15:15:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:04.473 15:15:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:04.473 15:15:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.473 15:15:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:04.473 15:15:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.473 15:15:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.473 15:15:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.006 15:15:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:07.006 00:30:07.006 real 0m41.244s 00:30:07.006 user 2m12.411s 00:30:07.006 sys 0m8.385s 00:30:07.006 15:15:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.006 15:15:25 -- common/autotest_common.sh@10 -- # set +x 00:30:07.006 ************************************ 00:30:07.006 END TEST nvmf_failover 00:30:07.006 ************************************ 00:30:07.006 15:15:25 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:07.006 15:15:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:07.006 15:15:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:07.006 15:15:25 -- common/autotest_common.sh@10 -- # set +x 00:30:07.006 ************************************ 00:30:07.006 START TEST nvmf_discovery 00:30:07.006 ************************************ 00:30:07.006 15:15:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:07.006 * Looking for test storage... 
00:30:07.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.006 15:15:25 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.006 15:15:25 -- nvmf/common.sh@7 -- # uname -s 00:30:07.006 15:15:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.006 15:15:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.006 15:15:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.006 15:15:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.006 15:15:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.006 15:15:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.006 15:15:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.006 15:15:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.006 15:15:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.006 15:15:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.006 15:15:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:07.006 15:15:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:07.006 15:15:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.006 15:15:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.006 15:15:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.006 15:15:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.006 15:15:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.006 15:15:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.006 15:15:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.006 15:15:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.006 15:15:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.006 15:15:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.006 15:15:25 -- paths/export.sh@5 -- # export PATH 00:30:07.006 15:15:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.006 15:15:25 -- nvmf/common.sh@46 -- # : 0 00:30:07.006 15:15:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:07.006 15:15:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:07.006 15:15:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:07.006 15:15:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.006 15:15:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.006 15:15:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:07.006 15:15:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:07.006 15:15:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:07.006 15:15:25 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:07.006 15:15:25 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:07.006 15:15:25 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:07.006 15:15:25 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:07.006 15:15:25 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:07.006 15:15:25 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:07.006 15:15:25 -- host/discovery.sh@25 -- # nvmftestinit 00:30:07.006 15:15:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:07.006 15:15:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.006 15:15:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:07.006 15:15:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:07.006 15:15:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:07.006 15:15:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.006 15:15:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:07.006 15:15:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.006 15:15:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:07.006 15:15:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:07.006 15:15:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:07.006 15:15:25 -- common/autotest_common.sh@10 -- # set +x 00:30:13.574 15:15:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:13.574 15:15:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:13.574 15:15:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:13.574 15:15:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:13.574 15:15:31 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:13.574 15:15:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:13.574 15:15:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:13.574 15:15:31 -- nvmf/common.sh@294 -- # net_devs=() 00:30:13.574 15:15:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:13.574 15:15:31 -- nvmf/common.sh@295 -- # e810=() 00:30:13.574 15:15:31 -- nvmf/common.sh@295 -- # local -ga e810 00:30:13.574 15:15:31 -- nvmf/common.sh@296 -- # x722=() 00:30:13.574 15:15:31 -- nvmf/common.sh@296 -- # local -ga x722 00:30:13.574 15:15:31 -- nvmf/common.sh@297 -- # mlx=() 00:30:13.574 15:15:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:13.574 15:15:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.574 15:15:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:13.574 15:15:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:13.574 15:15:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:13.574 15:15:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:13.574 15:15:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:13.574 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:13.574 15:15:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:13.574 15:15:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:13.574 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:13.574 15:15:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:13.574 15:15:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:13.574 
15:15:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.574 15:15:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:13.574 15:15:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.574 15:15:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:13.574 Found net devices under 0000:af:00.0: cvl_0_0 00:30:13.574 15:15:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.574 15:15:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:13.574 15:15:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.574 15:15:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:13.574 15:15:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.574 15:15:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:13.574 Found net devices under 0000:af:00.1: cvl_0_1 00:30:13.574 15:15:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.574 15:15:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:13.574 15:15:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:13.574 15:15:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:13.574 15:15:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:13.575 15:15:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:13.575 15:15:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.575 15:15:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.575 15:15:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.575 15:15:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:13.575 15:15:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.575 15:15:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.575 15:15:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:13.575 15:15:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.575 15:15:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.575 15:15:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:13.575 15:15:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:13.575 15:15:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.575 15:15:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.575 15:15:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.575 15:15:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.575 15:15:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:13.575 15:15:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.575 15:15:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.575 15:15:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.575 15:15:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:13.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:30:13.575 00:30:13.575 --- 10.0.0.2 ping statistics --- 00:30:13.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.575 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:13.575 15:15:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:30:13.575 00:30:13.575 --- 10.0.0.1 ping statistics --- 00:30:13.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.575 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:30:13.575 15:15:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.575 15:15:31 -- nvmf/common.sh@410 -- # return 0 00:30:13.575 15:15:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:13.575 15:15:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.575 15:15:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:13.575 15:15:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:13.575 15:15:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.575 15:15:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:13.575 15:15:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:13.575 15:15:31 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:13.575 15:15:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:13.575 15:15:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:13.575 15:15:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.575 15:15:31 -- nvmf/common.sh@469 -- # nvmfpid=3454764 00:30:13.575 15:15:31 -- nvmf/common.sh@470 -- # waitforlisten 3454764 00:30:13.575 15:15:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:13.575 15:15:31 -- common/autotest_common.sh@819 -- # '[' -z 3454764 ']' 00:30:13.575 15:15:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.575 15:15:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:13.575 15:15:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.575 15:15:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:13.575 15:15:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.575 [2024-06-11 15:15:31.998643] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:13.575 [2024-06-11 15:15:31.998698] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.575 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.575 [2024-06-11 15:15:32.086556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.575 [2024-06-11 15:15:32.170718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:13.575 [2024-06-11 15:15:32.170865] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.575 [2024-06-11 15:15:32.170876] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.575 [2024-06-11 15:15:32.170886] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
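The nvmf_tcp_init block above moves one port of the E810 NIC into a network namespace so the target and the initiator can exercise real hardware on a single box. Below is a minimal sketch that mirrors exactly the ip/iptables/ping sequence visible in this log; the interface names, namespace name and addresses are the ones from this run, and the script itself is only an illustration, not the SPDK helper.

#!/usr/bin/env bash
# Sketch of the namespace topology used above (names/addresses taken from this
# run; adjust for your own NICs). Not the SPDK helper itself, just the same
# sequence of commands.
set -euo pipefail

TGT_IF=cvl_0_0            # port that becomes the target side
INI_IF=cvl_0_1            # port that stays in the default namespace
NS=cvl_0_0_ns_spdk        # namespace the target runs in
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before starting the target, as the log does.
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INI_IP"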
00:30:13.575 [2024-06-11 15:15:32.170915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.140 15:15:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:14.140 15:15:32 -- common/autotest_common.sh@852 -- # return 0 00:30:14.140 15:15:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:14.140 15:15:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:14.140 15:15:32 -- common/autotest_common.sh@10 -- # set +x 00:30:14.140 15:15:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.140 15:15:32 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:14.140 15:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.140 15:15:32 -- common/autotest_common.sh@10 -- # set +x 00:30:14.140 [2024-06-11 15:15:32.964167] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.140 15:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.140 15:15:32 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:14.140 15:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.140 15:15:32 -- common/autotest_common.sh@10 -- # set +x 00:30:14.140 [2024-06-11 15:15:32.972329] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:14.140 15:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.140 15:15:32 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:14.140 15:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.140 15:15:32 -- common/autotest_common.sh@10 -- # set +x 00:30:14.399 null0 00:30:14.399 15:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.399 15:15:32 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:14.399 15:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.399 15:15:32 -- common/autotest_common.sh@10 -- # set +x 00:30:14.399 null1 00:30:14.399 15:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.399 15:15:32 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:14.399 15:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.399 15:15:32 -- common/autotest_common.sh@10 -- # set +x 00:30:14.399 15:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.399 15:15:33 -- host/discovery.sh@45 -- # hostpid=3454827 00:30:14.399 15:15:33 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:14.399 15:15:33 -- host/discovery.sh@46 -- # waitforlisten 3454827 /tmp/host.sock 00:30:14.399 15:15:33 -- common/autotest_common.sh@819 -- # '[' -z 3454827 ']' 00:30:14.399 15:15:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:14.399 15:15:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:14.399 15:15:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:14.399 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:14.399 15:15:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:14.399 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:30:14.399 [2024-06-11 15:15:33.045822] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:14.399 [2024-06-11 15:15:33.045875] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454827 ] 00:30:14.399 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.399 [2024-06-11 15:15:33.135605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.399 [2024-06-11 15:15:33.222262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:14.399 [2024-06-11 15:15:33.222413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.337 15:15:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:15.337 15:15:33 -- common/autotest_common.sh@852 -- # return 0 00:30:15.337 15:15:33 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:15.337 15:15:33 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:15.337 15:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.337 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:30:15.337 15:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.337 15:15:33 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:15.337 15:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.337 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:30:15.337 15:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.337 15:15:33 -- host/discovery.sh@72 -- # notify_id=0 00:30:15.337 15:15:33 -- host/discovery.sh@78 -- # get_subsystem_names 00:30:15.337 15:15:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:15.337 15:15:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:15.337 15:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.337 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:30:15.337 15:15:33 -- host/discovery.sh@59 -- # sort 00:30:15.337 15:15:33 -- host/discovery.sh@59 -- # xargs 00:30:15.337 15:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.337 15:15:33 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:30:15.337 15:15:33 -- host/discovery.sh@79 -- # get_bdev_list 00:30:15.337 15:15:33 -- host/discovery.sh@55 -- # sort 00:30:15.337 15:15:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.337 15:15:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:15.337 15:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.337 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:30:15.337 15:15:33 -- host/discovery.sh@55 -- # xargs 00:30:15.337 15:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.337 15:15:34 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:30:15.337 15:15:34 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:15.337 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.337 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.337 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.337 15:15:34 -- host/discovery.sh@82 -- # get_subsystem_names 00:30:15.337 15:15:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:15.337 15:15:34 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:30:15.337 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.337 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.337 15:15:34 -- host/discovery.sh@59 -- # sort 00:30:15.337 15:15:34 -- host/discovery.sh@59 -- # xargs 00:30:15.337 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.337 15:15:34 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:30:15.337 15:15:34 -- host/discovery.sh@83 -- # get_bdev_list 00:30:15.337 15:15:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.337 15:15:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:15.337 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.337 15:15:34 -- host/discovery.sh@55 -- # sort 00:30:15.337 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.337 15:15:34 -- host/discovery.sh@55 -- # xargs 00:30:15.337 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.337 15:15:34 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:15.337 15:15:34 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:15.337 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.337 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.337 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.337 15:15:34 -- host/discovery.sh@86 -- # get_subsystem_names 00:30:15.337 15:15:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:15.337 15:15:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:15.337 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.337 15:15:34 -- host/discovery.sh@59 -- # sort 00:30:15.337 15:15:34 -- host/discovery.sh@59 -- # xargs 00:30:15.337 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.337 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.596 15:15:34 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:30:15.596 15:15:34 -- host/discovery.sh@87 -- # get_bdev_list 00:30:15.596 15:15:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.596 15:15:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:15.596 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.596 15:15:34 -- host/discovery.sh@55 -- # sort 00:30:15.596 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.596 15:15:34 -- host/discovery.sh@55 -- # xargs 00:30:15.596 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.596 15:15:34 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:15.596 15:15:34 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:15.596 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.596 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.596 [2024-06-11 15:15:34.263876] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.596 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.596 15:15:34 -- host/discovery.sh@92 -- # get_subsystem_names 00:30:15.596 15:15:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:15.596 15:15:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:15.596 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.596 15:15:34 -- host/discovery.sh@59 -- # sort 00:30:15.596 15:15:34 -- host/discovery.sh@59 -- # xargs 00:30:15.596 15:15:34 -- 
common/autotest_common.sh@10 -- # set +x 00:30:15.596 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.596 15:15:34 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:15.596 15:15:34 -- host/discovery.sh@93 -- # get_bdev_list 00:30:15.596 15:15:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.596 15:15:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:15.596 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.596 15:15:34 -- host/discovery.sh@55 -- # sort 00:30:15.596 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.596 15:15:34 -- host/discovery.sh@55 -- # xargs 00:30:15.596 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.596 15:15:34 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:30:15.596 15:15:34 -- host/discovery.sh@94 -- # get_notification_count 00:30:15.596 15:15:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:15.597 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.597 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.597 15:15:34 -- host/discovery.sh@74 -- # jq '. | length' 00:30:15.597 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.597 15:15:34 -- host/discovery.sh@74 -- # notification_count=0 00:30:15.597 15:15:34 -- host/discovery.sh@75 -- # notify_id=0 00:30:15.597 15:15:34 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:30:15.597 15:15:34 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:15.597 15:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.597 15:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.597 15:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.597 15:15:34 -- host/discovery.sh@100 -- # sleep 1 00:30:16.165 [2024-06-11 15:15:34.975244] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:16.165 [2024-06-11 15:15:34.975267] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:16.165 [2024-06-11 15:15:34.975286] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:16.424 [2024-06-11 15:15:35.063599] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:16.683 [2024-06-11 15:15:35.286030] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:16.683 [2024-06-11 15:15:35.286053] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:16.683 15:15:35 -- host/discovery.sh@101 -- # get_subsystem_names 00:30:16.683 15:15:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:16.683 15:15:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:16.683 15:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.683 15:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:16.683 15:15:35 -- host/discovery.sh@59 -- # sort 00:30:16.683 15:15:35 -- host/discovery.sh@59 -- # xargs 00:30:16.683 15:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.683 15:15:35 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.683 15:15:35 -- host/discovery.sh@102 -- # get_bdev_list 00:30:16.683 15:15:35 -- host/discovery.sh@55 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:16.683 15:15:35 -- host/discovery.sh@55 -- # xargs 00:30:16.683 15:15:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:16.683 15:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.683 15:15:35 -- host/discovery.sh@55 -- # sort 00:30:16.683 15:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:16.683 15:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.942 15:15:35 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:16.942 15:15:35 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:30:16.942 15:15:35 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:16.942 15:15:35 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:16.942 15:15:35 -- host/discovery.sh@63 -- # sort -n 00:30:16.942 15:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.942 15:15:35 -- host/discovery.sh@63 -- # xargs 00:30:16.942 15:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:16.942 15:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.942 15:15:35 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:30:16.942 15:15:35 -- host/discovery.sh@104 -- # get_notification_count 00:30:16.942 15:15:35 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:16.942 15:15:35 -- host/discovery.sh@74 -- # jq '. | length' 00:30:16.942 15:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.942 15:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:16.942 15:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.942 15:15:35 -- host/discovery.sh@74 -- # notification_count=1 00:30:16.942 15:15:35 -- host/discovery.sh@75 -- # notify_id=1 00:30:16.942 15:15:35 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:30:16.942 15:15:35 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:16.942 15:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.942 15:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:16.942 15:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.942 15:15:35 -- host/discovery.sh@109 -- # sleep 1 00:30:17.879 15:15:36 -- host/discovery.sh@110 -- # get_bdev_list 00:30:17.879 15:15:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.879 15:15:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:17.879 15:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.879 15:15:36 -- host/discovery.sh@55 -- # sort 00:30:17.879 15:15:36 -- common/autotest_common.sh@10 -- # set +x 00:30:17.879 15:15:36 -- host/discovery.sh@55 -- # xargs 00:30:17.880 15:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.880 15:15:36 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:17.880 15:15:36 -- host/discovery.sh@111 -- # get_notification_count 00:30:17.880 15:15:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:17.880 15:15:36 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:17.880 15:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.880 15:15:36 -- common/autotest_common.sh@10 -- # set +x 00:30:17.880 15:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.139 15:15:36 -- host/discovery.sh@74 -- # notification_count=1 00:30:18.139 15:15:36 -- host/discovery.sh@75 -- # notify_id=2 00:30:18.139 15:15:36 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:30:18.139 15:15:36 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:18.139 15:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.139 15:15:36 -- common/autotest_common.sh@10 -- # set +x 00:30:18.139 [2024-06-11 15:15:36.759224] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:18.139 [2024-06-11 15:15:36.760383] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:18.139 [2024-06-11 15:15:36.760412] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:18.139 15:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.139 15:15:36 -- host/discovery.sh@117 -- # sleep 1 00:30:18.139 [2024-06-11 15:15:36.846684] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:18.139 [2024-06-11 15:15:36.950566] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:18.139 [2024-06-11 15:15:36.950586] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:18.139 [2024-06-11 15:15:36.950593] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:19.076 15:15:37 -- host/discovery.sh@118 -- # get_subsystem_names 00:30:19.076 15:15:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:19.076 15:15:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:19.076 15:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.076 15:15:37 -- host/discovery.sh@59 -- # sort 00:30:19.076 15:15:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.076 15:15:37 -- host/discovery.sh@59 -- # xargs 00:30:19.076 15:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.076 15:15:37 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.076 15:15:37 -- host/discovery.sh@119 -- # get_bdev_list 00:30:19.076 15:15:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:19.076 15:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.076 15:15:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.076 15:15:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:19.076 15:15:37 -- host/discovery.sh@55 -- # sort 00:30:19.076 15:15:37 -- host/discovery.sh@55 -- # xargs 00:30:19.076 15:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.076 15:15:37 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:19.076 15:15:37 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:30:19.076 15:15:37 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:19.076 15:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.076 15:15:37 -- common/autotest_common.sh@10 -- 
# set +x 00:30:19.076 15:15:37 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:19.076 15:15:37 -- host/discovery.sh@63 -- # sort -n 00:30:19.076 15:15:37 -- host/discovery.sh@63 -- # xargs 00:30:19.076 15:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.336 15:15:37 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:19.336 15:15:37 -- host/discovery.sh@121 -- # get_notification_count 00:30:19.336 15:15:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:19.336 15:15:37 -- host/discovery.sh@74 -- # jq '. | length' 00:30:19.336 15:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.336 15:15:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.336 15:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.336 15:15:37 -- host/discovery.sh@74 -- # notification_count=0 00:30:19.336 15:15:37 -- host/discovery.sh@75 -- # notify_id=2 00:30:19.336 15:15:37 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:30:19.336 15:15:37 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.336 15:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.336 15:15:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.336 [2024-06-11 15:15:37.979577] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:19.336 [2024-06-11 15:15:37.979603] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:19.336 [2024-06-11 15:15:37.980089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.336 [2024-06-11 15:15:37.980110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.336 [2024-06-11 15:15:37.980122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.336 [2024-06-11 15:15:37.980132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.336 [2024-06-11 15:15:37.980142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.336 [2024-06-11 15:15:37.980151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.336 [2024-06-11 15:15:37.980162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.336 [2024-06-11 15:15:37.980171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.336 [2024-06-11 15:15:37.980181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.336 15:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.336 15:15:37 -- host/discovery.sh@127 -- # sleep 1 00:30:19.336 [2024-06-11 15:15:37.990096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.336 [2024-06-11 15:15:38.000141] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.336 [2024-06-11 15:15:38.000521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.000813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.000829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.336 [2024-06-11 15:15:38.000840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.336 [2024-06-11 15:15:38.000856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.336 [2024-06-11 15:15:38.000879] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.336 [2024-06-11 15:15:38.000890] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.336 [2024-06-11 15:15:38.000901] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.336 [2024-06-11 15:15:38.000918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.336 [2024-06-11 15:15:38.010204] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.336 [2024-06-11 15:15:38.010475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.010822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.010838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.336 [2024-06-11 15:15:38.010853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.336 [2024-06-11 15:15:38.010868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.336 [2024-06-11 15:15:38.010890] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.336 [2024-06-11 15:15:38.010900] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.336 [2024-06-11 15:15:38.010910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.336 [2024-06-11 15:15:38.010925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.336 [2024-06-11 15:15:38.020264] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.336 [2024-06-11 15:15:38.020619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.020905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.020920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.336 [2024-06-11 15:15:38.020931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.336 [2024-06-11 15:15:38.020946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.336 [2024-06-11 15:15:38.020960] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.336 [2024-06-11 15:15:38.020969] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.336 [2024-06-11 15:15:38.020979] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.336 [2024-06-11 15:15:38.021001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.336 [2024-06-11 15:15:38.030323] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.336 [2024-06-11 15:15:38.030706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.031018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.031040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.336 [2024-06-11 15:15:38.031050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.336 [2024-06-11 15:15:38.031066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.336 [2024-06-11 15:15:38.031105] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.336 [2024-06-11 15:15:38.031116] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.336 [2024-06-11 15:15:38.031127] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.336 [2024-06-11 15:15:38.031141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.336 [2024-06-11 15:15:38.040386] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.336 [2024-06-11 15:15:38.040748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.041008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.336 [2024-06-11 15:15:38.041023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.336 [2024-06-11 15:15:38.041039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.336 [2024-06-11 15:15:38.041060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.336 [2024-06-11 15:15:38.041074] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.336 [2024-06-11 15:15:38.041083] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.337 [2024-06-11 15:15:38.041093] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.337 [2024-06-11 15:15:38.041106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.337 [2024-06-11 15:15:38.050450] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.337 [2024-06-11 15:15:38.050827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.051083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.051099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.337 [2024-06-11 15:15:38.051110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.337 [2024-06-11 15:15:38.051125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.337 [2024-06-11 15:15:38.051146] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.337 [2024-06-11 15:15:38.051155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.337 [2024-06-11 15:15:38.051165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.337 [2024-06-11 15:15:38.051179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.337 [2024-06-11 15:15:38.060509] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.337 [2024-06-11 15:15:38.060886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.061085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.061101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.337 [2024-06-11 15:15:38.061112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.337 [2024-06-11 15:15:38.061127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.337 [2024-06-11 15:15:38.061140] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.337 [2024-06-11 15:15:38.061149] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.337 [2024-06-11 15:15:38.061159] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.337 [2024-06-11 15:15:38.061172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.337 [2024-06-11 15:15:38.070568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.337 [2024-06-11 15:15:38.070958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.071270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.071286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.337 [2024-06-11 15:15:38.071296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.337 [2024-06-11 15:15:38.071312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.337 [2024-06-11 15:15:38.071346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.337 [2024-06-11 15:15:38.071357] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.337 [2024-06-11 15:15:38.071368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.337 [2024-06-11 15:15:38.071382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.337 [2024-06-11 15:15:38.080634] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.337 [2024-06-11 15:15:38.080923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.081265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.081280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.337 [2024-06-11 15:15:38.081291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.337 [2024-06-11 15:15:38.081306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.337 [2024-06-11 15:15:38.081320] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.337 [2024-06-11 15:15:38.081328] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.337 [2024-06-11 15:15:38.081338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.337 [2024-06-11 15:15:38.081352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.337 [2024-06-11 15:15:38.090694] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.337 [2024-06-11 15:15:38.091071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.091412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.091426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.337 [2024-06-11 15:15:38.091436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.337 [2024-06-11 15:15:38.091451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.337 [2024-06-11 15:15:38.091481] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.337 [2024-06-11 15:15:38.091491] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.337 [2024-06-11 15:15:38.091502] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.337 [2024-06-11 15:15:38.091517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
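The repeated "connect() failed, errno = 111 ... Resetting controller failed." entries above are the host-side path to 10.0.0.2:4420 retrying after the test removed that listener; once the next discovery log page is processed, the 4420 path is dropped and only 4421 remains (the "not found" / "4421 found again" messages that follow). A small sketch of that failover step and the check the test performs on the surviving path is shown below; the listener and controller names match this run, while the polling loop is only an illustrative way to wait out the ECONNREFUSED retries.

#!/usr/bin/env bash
# Sketch of the listener failover step behind the retry noise above.
set -euo pipefail

RPC=./scripts/rpc.py
HOST_SOCK=/tmp/host.sock
NQN=nqn.2016-06.io.spdk:cnode0

$RPC nvmf_subsystem_add_listener    "$NQN" -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# The old path keeps failing with errno 111 until the discovery poller removes
# it, so poll the remaining paths instead of asserting immediately.
for _ in $(seq 1 30); do
    paths=$($RPC -s "$HOST_SOCK" bdev_nvme_get_controllers -n nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
    [[ $paths == "4421" ]] && break
    sleep 1
done
echo "remaining path(s): $paths"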
00:30:19.337 [2024-06-11 15:15:38.100752] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.337 [2024-06-11 15:15:38.101054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.101395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.337 [2024-06-11 15:15:38.101410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6eb70 with addr=10.0.0.2, port=4420 00:30:19.337 [2024-06-11 15:15:38.101420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6eb70 is same with the state(5) to be set 00:30:19.337 [2024-06-11 15:15:38.101435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6eb70 (9): Bad file descriptor 00:30:19.337 [2024-06-11 15:15:38.101449] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:19.337 [2024-06-11 15:15:38.101462] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:19.337 [2024-06-11 15:15:38.101472] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:19.337 [2024-06-11 15:15:38.101486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:19.337 [2024-06-11 15:15:38.107542] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:19.337 [2024-06-11 15:15:38.107563] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:20.274 15:15:38 -- host/discovery.sh@128 -- # get_subsystem_names 00:30:20.274 15:15:38 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.274 15:15:38 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.274 15:15:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.274 15:15:38 -- host/discovery.sh@59 -- # sort 00:30:20.274 15:15:38 -- common/autotest_common.sh@10 -- # set +x 00:30:20.274 15:15:38 -- host/discovery.sh@59 -- # xargs 00:30:20.275 15:15:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.275 15:15:39 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.275 15:15:39 -- host/discovery.sh@129 -- # get_bdev_list 00:30:20.275 15:15:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.275 15:15:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:20.275 15:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.275 15:15:39 -- host/discovery.sh@55 -- # sort 00:30:20.275 15:15:39 -- common/autotest_common.sh@10 -- # set +x 00:30:20.275 15:15:39 -- host/discovery.sh@55 -- # xargs 00:30:20.275 15:15:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.275 15:15:39 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:20.275 15:15:39 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:30:20.275 15:15:39 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:20.275 15:15:39 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:20.275 15:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.275 15:15:39 -- host/discovery.sh@63 -- # sort -n 00:30:20.275 15:15:39 -- 
common/autotest_common.sh@10 -- # set +x 00:30:20.275 15:15:39 -- host/discovery.sh@63 -- # xargs 00:30:20.275 15:15:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.534 15:15:39 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:30:20.534 15:15:39 -- host/discovery.sh@131 -- # get_notification_count 00:30:20.534 15:15:39 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:20.534 15:15:39 -- host/discovery.sh@74 -- # jq '. | length' 00:30:20.534 15:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.534 15:15:39 -- common/autotest_common.sh@10 -- # set +x 00:30:20.534 15:15:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.534 15:15:39 -- host/discovery.sh@74 -- # notification_count=0 00:30:20.534 15:15:39 -- host/discovery.sh@75 -- # notify_id=2 00:30:20.534 15:15:39 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:30:20.534 15:15:39 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:20.534 15:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.534 15:15:39 -- common/autotest_common.sh@10 -- # set +x 00:30:20.534 15:15:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.534 15:15:39 -- host/discovery.sh@135 -- # sleep 1 00:30:21.471 15:15:40 -- host/discovery.sh@136 -- # get_subsystem_names 00:30:21.471 15:15:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:21.471 15:15:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:21.471 15:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.471 15:15:40 -- host/discovery.sh@59 -- # sort 00:30:21.471 15:15:40 -- common/autotest_common.sh@10 -- # set +x 00:30:21.471 15:15:40 -- host/discovery.sh@59 -- # xargs 00:30:21.471 15:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.471 15:15:40 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:30:21.471 15:15:40 -- host/discovery.sh@137 -- # get_bdev_list 00:30:21.471 15:15:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:21.471 15:15:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:21.471 15:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.471 15:15:40 -- host/discovery.sh@55 -- # sort 00:30:21.471 15:15:40 -- common/autotest_common.sh@10 -- # set +x 00:30:21.471 15:15:40 -- host/discovery.sh@55 -- # xargs 00:30:21.471 15:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.730 15:15:40 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:30:21.730 15:15:40 -- host/discovery.sh@138 -- # get_notification_count 00:30:21.730 15:15:40 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:21.730 15:15:40 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:21.730 15:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.730 15:15:40 -- common/autotest_common.sh@10 -- # set +x 00:30:21.730 15:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.730 15:15:40 -- host/discovery.sh@74 -- # notification_count=2 00:30:21.730 15:15:40 -- host/discovery.sh@75 -- # notify_id=4 00:30:21.730 15:15:40 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:30:21.730 15:15:40 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:21.730 15:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.730 15:15:40 -- common/autotest_common.sh@10 -- # set +x 00:30:22.667 [2024-06-11 15:15:41.431225] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:22.667 [2024-06-11 15:15:41.431249] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:22.667 [2024-06-11 15:15:41.431268] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:22.926 [2024-06-11 15:15:41.517536] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:22.926 [2024-06-11 15:15:41.584828] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:22.926 [2024-06-11 15:15:41.584866] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:22.926 15:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.926 15:15:41 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.926 15:15:41 -- common/autotest_common.sh@640 -- # local es=0 00:30:22.926 15:15:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.926 15:15:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:22.926 15:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:22.926 15:15:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:22.926 15:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:22.926 15:15:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.926 15:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.926 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:30:22.926 request: 00:30:22.926 { 00:30:22.926 "name": "nvme", 00:30:22.926 "trtype": "tcp", 00:30:22.926 "traddr": "10.0.0.2", 00:30:22.926 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:22.926 "adrfam": "ipv4", 00:30:22.926 "trsvcid": "8009", 00:30:22.926 "wait_for_attach": true, 00:30:22.926 "method": "bdev_nvme_start_discovery", 00:30:22.926 "req_id": 1 00:30:22.926 } 00:30:22.926 Got JSON-RPC error response 00:30:22.926 response: 00:30:22.926 { 00:30:22.926 "code": -17, 00:30:22.926 "message": "File exists" 00:30:22.926 } 00:30:22.926 15:15:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:22.926 15:15:41 -- common/autotest_common.sh@643 -- # es=1 00:30:22.926 15:15:41 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:22.926 15:15:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:22.926 15:15:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:22.926 15:15:41 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:30:22.926 15:15:41 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:22.926 15:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.926 15:15:41 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:22.926 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:30:22.926 15:15:41 -- host/discovery.sh@67 -- # sort 00:30:22.926 15:15:41 -- host/discovery.sh@67 -- # xargs 00:30:22.927 15:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.927 15:15:41 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:30:22.927 15:15:41 -- host/discovery.sh@147 -- # get_bdev_list 00:30:22.927 15:15:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.927 15:15:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.927 15:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.927 15:15:41 -- host/discovery.sh@55 -- # sort 00:30:22.927 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:30:22.927 15:15:41 -- host/discovery.sh@55 -- # xargs 00:30:22.927 15:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.927 15:15:41 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:22.927 15:15:41 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.927 15:15:41 -- common/autotest_common.sh@640 -- # local es=0 00:30:22.927 15:15:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.927 15:15:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:22.927 15:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:22.927 15:15:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:22.927 15:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:22.927 15:15:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:22.927 15:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.927 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:30:22.927 request: 00:30:22.927 { 00:30:22.927 "name": "nvme_second", 00:30:22.927 "trtype": "tcp", 00:30:22.927 "traddr": "10.0.0.2", 00:30:22.927 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:22.927 "adrfam": "ipv4", 00:30:22.927 "trsvcid": "8009", 00:30:22.927 "wait_for_attach": true, 00:30:22.927 "method": "bdev_nvme_start_discovery", 00:30:22.927 "req_id": 1 00:30:22.927 } 00:30:22.927 Got JSON-RPC error response 00:30:22.927 response: 00:30:22.927 { 00:30:22.927 "code": -17, 00:30:22.927 "message": "File exists" 00:30:22.927 } 00:30:22.927 15:15:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:22.927 15:15:41 -- common/autotest_common.sh@643 -- # es=1 00:30:22.927 15:15:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:22.927 15:15:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:22.927 15:15:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:22.927 
15:15:41 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:30:22.927 15:15:41 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:22.927 15:15:41 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:22.927 15:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.927 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:30:22.927 15:15:41 -- host/discovery.sh@67 -- # sort 00:30:22.927 15:15:41 -- host/discovery.sh@67 -- # xargs 00:30:22.927 15:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.186 15:15:41 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:30:23.186 15:15:41 -- host/discovery.sh@153 -- # get_bdev_list 00:30:23.186 15:15:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:23.186 15:15:41 -- host/discovery.sh@55 -- # xargs 00:30:23.186 15:15:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:23.186 15:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.186 15:15:41 -- host/discovery.sh@55 -- # sort 00:30:23.186 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:30:23.186 15:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.186 15:15:41 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:23.186 15:15:41 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:23.186 15:15:41 -- common/autotest_common.sh@640 -- # local es=0 00:30:23.186 15:15:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:23.186 15:15:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:23.186 15:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:23.186 15:15:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:23.186 15:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:23.186 15:15:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:23.186 15:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.186 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:30:24.122 [2024-06-11 15:15:42.836534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.123 [2024-06-11 15:15:42.836924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.123 [2024-06-11 15:15:42.836942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc812c0 with addr=10.0.0.2, port=8010 00:30:24.123 [2024-06-11 15:15:42.836958] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:24.123 [2024-06-11 15:15:42.836967] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:24.123 [2024-06-11 15:15:42.836977] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:25.060 [2024-06-11 15:15:43.838901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.060 [2024-06-11 15:15:43.839147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.060 [2024-06-11 15:15:43.839166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0xc80b20 with addr=10.0.0.2, port=8010 00:30:25.060 [2024-06-11 15:15:43.839182] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:25.060 [2024-06-11 15:15:43.839190] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:25.060 [2024-06-11 15:15:43.839199] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:26.437 [2024-06-11 15:15:44.841029] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:26.437 request: 00:30:26.437 { 00:30:26.437 "name": "nvme_second", 00:30:26.437 "trtype": "tcp", 00:30:26.437 "traddr": "10.0.0.2", 00:30:26.437 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:26.437 "adrfam": "ipv4", 00:30:26.437 "trsvcid": "8010", 00:30:26.437 "attach_timeout_ms": 3000, 00:30:26.437 "method": "bdev_nvme_start_discovery", 00:30:26.437 "req_id": 1 00:30:26.437 } 00:30:26.437 Got JSON-RPC error response 00:30:26.437 response: 00:30:26.437 { 00:30:26.437 "code": -110, 00:30:26.437 "message": "Connection timed out" 00:30:26.437 } 00:30:26.437 15:15:44 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:26.437 15:15:44 -- common/autotest_common.sh@643 -- # es=1 00:30:26.437 15:15:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:26.437 15:15:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:26.437 15:15:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:26.437 15:15:44 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:30:26.437 15:15:44 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:26.437 15:15:44 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:26.437 15:15:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:26.437 15:15:44 -- host/discovery.sh@67 -- # sort 00:30:26.437 15:15:44 -- common/autotest_common.sh@10 -- # set +x 00:30:26.437 15:15:44 -- host/discovery.sh@67 -- # xargs 00:30:26.437 15:15:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:26.437 15:15:44 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:30:26.437 15:15:44 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:30:26.437 15:15:44 -- host/discovery.sh@162 -- # kill 3454827 00:30:26.437 15:15:44 -- host/discovery.sh@163 -- # nvmftestfini 00:30:26.437 15:15:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:26.437 15:15:44 -- nvmf/common.sh@116 -- # sync 00:30:26.437 15:15:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:26.437 15:15:44 -- nvmf/common.sh@119 -- # set +e 00:30:26.437 15:15:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:26.437 15:15:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:26.437 rmmod nvme_tcp 00:30:26.437 rmmod nvme_fabrics 00:30:26.437 rmmod nvme_keyring 00:30:26.437 15:15:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:26.437 15:15:44 -- nvmf/common.sh@123 -- # set -e 00:30:26.437 15:15:44 -- nvmf/common.sh@124 -- # return 0 00:30:26.437 15:15:44 -- nvmf/common.sh@477 -- # '[' -n 3454764 ']' 00:30:26.437 15:15:44 -- nvmf/common.sh@478 -- # killprocess 3454764 00:30:26.437 15:15:44 -- common/autotest_common.sh@926 -- # '[' -z 3454764 ']' 00:30:26.437 15:15:44 -- common/autotest_common.sh@930 -- # kill -0 3454764 00:30:26.437 15:15:44 -- common/autotest_common.sh@931 -- # uname 00:30:26.437 15:15:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:26.437 15:15:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3454764 00:30:26.437 
15:15:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:26.437 15:15:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:26.437 15:15:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3454764' 00:30:26.437 killing process with pid 3454764 00:30:26.437 15:15:45 -- common/autotest_common.sh@945 -- # kill 3454764 00:30:26.437 15:15:45 -- common/autotest_common.sh@950 -- # wait 3454764 00:30:26.437 15:15:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:26.437 15:15:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:26.437 15:15:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:26.437 15:15:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:26.437 15:15:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:26.437 15:15:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.437 15:15:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.437 15:15:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.972 15:15:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:28.972 00:30:28.972 real 0m21.988s 00:30:28.972 user 0m28.775s 00:30:28.972 sys 0m6.336s 00:30:28.972 15:15:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:28.972 15:15:47 -- common/autotest_common.sh@10 -- # set +x 00:30:28.972 ************************************ 00:30:28.972 END TEST nvmf_discovery 00:30:28.972 ************************************ 00:30:28.972 15:15:47 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:28.972 15:15:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:28.972 15:15:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:28.972 15:15:47 -- common/autotest_common.sh@10 -- # set +x 00:30:28.972 ************************************ 00:30:28.972 START TEST nvmf_discovery_remove_ifc 00:30:28.972 ************************************ 00:30:28.972 15:15:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:28.972 * Looking for test storage... 
00:30:28.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:28.972 15:15:47 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.972 15:15:47 -- nvmf/common.sh@7 -- # uname -s 00:30:28.972 15:15:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.972 15:15:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.972 15:15:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.972 15:15:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.972 15:15:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.972 15:15:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.972 15:15:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.973 15:15:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.973 15:15:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.973 15:15:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.973 15:15:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:28.973 15:15:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:28.973 15:15:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.973 15:15:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.973 15:15:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.973 15:15:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.973 15:15:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.973 15:15:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.973 15:15:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.973 15:15:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.973 15:15:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.973 15:15:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.973 15:15:47 -- paths/export.sh@5 -- # export PATH 00:30:28.973 15:15:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.973 15:15:47 -- nvmf/common.sh@46 -- # : 0 00:30:28.973 15:15:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:28.973 15:15:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:28.973 15:15:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:28.973 15:15:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.973 15:15:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.973 15:15:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:28.973 15:15:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:28.973 15:15:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:28.973 15:15:47 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:28.973 15:15:47 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:28.973 15:15:47 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:28.973 15:15:47 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:28.973 15:15:47 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:28.973 15:15:47 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:28.973 15:15:47 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:28.973 15:15:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:28.973 15:15:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.973 15:15:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:28.973 15:15:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:28.973 15:15:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:28.973 15:15:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.973 15:15:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:28.973 15:15:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.973 15:15:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:28.973 15:15:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:28.973 15:15:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:28.973 15:15:47 -- common/autotest_common.sh@10 -- # set +x 00:30:35.540 15:15:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:35.540 15:15:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:35.540 15:15:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:35.540 15:15:53 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:35.540 15:15:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:35.540 15:15:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:35.540 15:15:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:35.540 15:15:53 -- nvmf/common.sh@294 -- # net_devs=() 00:30:35.540 15:15:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:35.540 15:15:53 -- nvmf/common.sh@295 -- # e810=() 00:30:35.540 15:15:53 -- nvmf/common.sh@295 -- # local -ga e810 00:30:35.540 15:15:53 -- nvmf/common.sh@296 -- # x722=() 00:30:35.540 15:15:53 -- nvmf/common.sh@296 -- # local -ga x722 00:30:35.540 15:15:53 -- nvmf/common.sh@297 -- # mlx=() 00:30:35.540 15:15:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:35.540 15:15:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.540 15:15:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:35.540 15:15:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:35.540 15:15:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:35.540 15:15:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:35.540 15:15:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:35.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:35.540 15:15:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:35.540 15:15:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:35.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:35.540 15:15:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:35.540 15:15:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:35.540 15:15:53 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:35.540 15:15:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.540 15:15:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:35.540 15:15:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.540 15:15:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:35.540 Found net devices under 0000:af:00.0: cvl_0_0 00:30:35.540 15:15:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.540 15:15:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:35.540 15:15:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.540 15:15:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:35.540 15:15:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.540 15:15:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:35.540 Found net devices under 0000:af:00.1: cvl_0_1 00:30:35.540 15:15:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.540 15:15:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:35.540 15:15:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:35.540 15:15:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:35.540 15:15:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.540 15:15:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.540 15:15:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.540 15:15:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:35.540 15:15:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.540 15:15:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.540 15:15:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:35.540 15:15:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.540 15:15:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.540 15:15:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:35.540 15:15:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:35.540 15:15:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.540 15:15:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.540 15:15:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.540 15:15:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.540 15:15:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:35.540 15:15:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.540 15:15:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.540 15:15:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.540 15:15:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:35.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:35.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:30:35.540 00:30:35.540 --- 10.0.0.2 ping statistics --- 00:30:35.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.540 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:30:35.540 15:15:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:35.540 00:30:35.540 --- 10.0.0.1 ping statistics --- 00:30:35.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.540 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:35.540 15:15:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.540 15:15:53 -- nvmf/common.sh@410 -- # return 0 00:30:35.540 15:15:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:35.540 15:15:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.540 15:15:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:35.540 15:15:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.540 15:15:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:35.540 15:15:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:35.540 15:15:53 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:35.541 15:15:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:35.541 15:15:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:35.541 15:15:53 -- common/autotest_common.sh@10 -- # set +x 00:30:35.541 15:15:54 -- nvmf/common.sh@469 -- # nvmfpid=3461114 00:30:35.541 15:15:54 -- nvmf/common.sh@470 -- # waitforlisten 3461114 00:30:35.541 15:15:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:35.541 15:15:54 -- common/autotest_common.sh@819 -- # '[' -z 3461114 ']' 00:30:35.541 15:15:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.541 15:15:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:35.541 15:15:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.541 15:15:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:35.541 15:15:54 -- common/autotest_common.sh@10 -- # set +x 00:30:35.541 [2024-06-11 15:15:54.055468] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:35.541 [2024-06-11 15:15:54.055522] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.541 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.541 [2024-06-11 15:15:54.143227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.541 [2024-06-11 15:15:54.233824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:35.541 [2024-06-11 15:15:54.233970] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:35.541 [2024-06-11 15:15:54.233981] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.541 [2024-06-11 15:15:54.233990] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.541 [2024-06-11 15:15:54.234012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.475 15:15:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:36.475 15:15:54 -- common/autotest_common.sh@852 -- # return 0 00:30:36.475 15:15:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:36.475 15:15:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:36.475 15:15:54 -- common/autotest_common.sh@10 -- # set +x 00:30:36.475 15:15:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.475 15:15:55 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:36.475 15:15:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.475 15:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:36.475 [2024-06-11 15:15:55.027911] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.475 [2024-06-11 15:15:55.036100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:36.475 null0 00:30:36.475 [2024-06-11 15:15:55.068072] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.475 15:15:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.475 15:15:55 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3461391 00:30:36.475 15:15:55 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3461391 /tmp/host.sock 00:30:36.475 15:15:55 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:36.475 15:15:55 -- common/autotest_common.sh@819 -- # '[' -z 3461391 ']' 00:30:36.475 15:15:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:36.475 15:15:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:36.475 15:15:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:36.475 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:36.475 15:15:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:36.475 15:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:36.475 [2024-06-11 15:15:55.134388] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:36.475 [2024-06-11 15:15:55.134442] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461391 ] 00:30:36.475 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.475 [2024-06-11 15:15:55.216323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.475 [2024-06-11 15:15:55.306600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:36.475 [2024-06-11 15:15:55.306754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.733 15:15:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:36.733 15:15:55 -- common/autotest_common.sh@852 -- # return 0 00:30:36.733 15:15:55 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:36.733 15:15:55 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:36.733 15:15:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.733 15:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:36.733 15:15:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.733 15:15:55 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:36.733 15:15:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.733 15:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:36.733 15:15:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.733 15:15:55 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:36.733 15:15:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.733 15:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:37.715 [2024-06-11 15:15:56.486241] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:37.715 [2024-06-11 15:15:56.486267] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:37.715 [2024-06-11 15:15:56.486286] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:37.974 [2024-06-11 15:15:56.574588] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:37.974 [2024-06-11 15:15:56.636455] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:37.974 [2024-06-11 15:15:56.636509] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:37.974 [2024-06-11 15:15:56.636537] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:37.974 [2024-06-11 15:15:56.636555] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:37.974 [2024-06-11 15:15:56.636581] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:37.974 15:15:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:37.975 15:15:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:37.975 15:15:56 -- common/autotest_common.sh@10 -- # set +x 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:37.975 [2024-06-11 15:15:56.644443] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7cbb00 was disconnected and freed. delete nvme_qpair. 00:30:37.975 15:15:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:37.975 15:15:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.975 15:15:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:37.975 15:15:56 -- common/autotest_common.sh@10 -- # set +x 00:30:38.233 15:15:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.233 15:15:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:38.233 15:15:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:39.169 15:15:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:39.169 15:15:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:39.169 15:15:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:39.169 15:15:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:39.169 15:15:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:39.169 15:15:57 -- common/autotest_common.sh@10 -- # set +x 00:30:39.169 15:15:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:39.169 15:15:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:39.169 15:15:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:39.169 15:15:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:40.104 15:15:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:40.104 15:15:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:40.104 15:15:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:40.104 15:15:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:40.104 15:15:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.104 15:15:58 -- common/autotest_common.sh@10 -- # set +x 00:30:40.104 15:15:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:40.104 15:15:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:40.362 15:15:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:40.362 15:15:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:41.299 15:15:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:41.299 15:15:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:30:41.299 15:15:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:41.299 15:15:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:41.299 15:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.299 15:15:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:41.299 15:15:59 -- common/autotest_common.sh@10 -- # set +x 00:30:41.299 15:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.299 15:16:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:41.299 15:16:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:42.233 15:16:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:42.233 15:16:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.233 15:16:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:42.233 15:16:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:42.233 15:16:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.233 15:16:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:42.233 15:16:01 -- common/autotest_common.sh@10 -- # set +x 00:30:42.233 15:16:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.491 15:16:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:42.492 15:16:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:43.426 [2024-06-11 15:16:02.077278] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:43.426 [2024-06-11 15:16:02.077329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.426 [2024-06-11 15:16:02.077344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.426 [2024-06-11 15:16:02.077356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.426 [2024-06-11 15:16:02.077367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.426 [2024-06-11 15:16:02.077377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.426 [2024-06-11 15:16:02.077387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.426 [2024-06-11 15:16:02.077398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.426 [2024-06-11 15:16:02.077408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.426 [2024-06-11 15:16:02.077419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.426 [2024-06-11 15:16:02.077429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.426 [2024-06-11 15:16:02.077439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x792dd0 is same with the state(5) to be set 00:30:43.426 [2024-06-11 15:16:02.087298] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x792dd0 (9): Bad file descriptor 00:30:43.426 15:16:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:43.426 15:16:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:43.426 15:16:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:43.426 15:16:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:43.426 15:16:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:43.426 15:16:02 -- common/autotest_common.sh@10 -- # set +x 00:30:43.426 15:16:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:43.426 [2024-06-11 15:16:02.097345] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:44.360 [2024-06-11 15:16:03.158065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:45.735 [2024-06-11 15:16:04.182121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:45.735 [2024-06-11 15:16:04.182196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x792dd0 with addr=10.0.0.2, port=4420 00:30:45.735 [2024-06-11 15:16:04.182226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x792dd0 is same with the state(5) to be set 00:30:45.735 [2024-06-11 15:16:04.182379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x792dd0 (9): Bad file descriptor 00:30:45.735 [2024-06-11 15:16:04.182427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.735 [2024-06-11 15:16:04.182475] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:45.735 [2024-06-11 15:16:04.182524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.735 [2024-06-11 15:16:04.182551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.735 [2024-06-11 15:16:04.182577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.735 [2024-06-11 15:16:04.182598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.735 [2024-06-11 15:16:04.182621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.735 [2024-06-11 15:16:04.182642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.735 [2024-06-11 15:16:04.182666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.735 [2024-06-11 15:16:04.182686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.736 [2024-06-11 15:16:04.182709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.736 [2024-06-11 15:16:04.182730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:45.736 [2024-06-11 15:16:04.182752] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:30:45.736 [2024-06-11 15:16:04.183272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7922c0 (9): Bad file descriptor 00:30:45.736 [2024-06-11 15:16:04.184294] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:45.736 [2024-06-11 15:16:04.184325] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:45.736 15:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.736 15:16:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:45.736 15:16:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:46.670 15:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.670 15:16:05 -- common/autotest_common.sh@10 -- # set +x 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:46.670 15:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:46.670 15:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:46.670 15:16:05 -- common/autotest_common.sh@10 -- # set +x 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:46.670 15:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:46.670 15:16:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:47.607 [2024-06-11 15:16:06.201453] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:47.607 [2024-06-11 15:16:06.201474] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:47.607 [2024-06-11 15:16:06.201493] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:47.607 [2024-06-11 15:16:06.327940] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:47.607 [2024-06-11 15:16:06.390812] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:47.607 [2024-06-11 15:16:06.390855] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:47.607 [2024-06-11 15:16:06.390879] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with 
offset 0 00:30:47.607 [2024-06-11 15:16:06.390897] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:47.607 [2024-06-11 15:16:06.390906] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:47.607 [2024-06-11 15:16:06.400429] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7d65b0 was disconnected and freed. delete nvme_qpair. 00:30:47.607 15:16:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:47.607 15:16:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.607 15:16:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:47.607 15:16:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.607 15:16:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:47.607 15:16:06 -- common/autotest_common.sh@10 -- # set +x 00:30:47.607 15:16:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:47.607 15:16:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.866 15:16:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:47.866 15:16:06 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:47.866 15:16:06 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3461391 00:30:47.866 15:16:06 -- common/autotest_common.sh@926 -- # '[' -z 3461391 ']' 00:30:47.866 15:16:06 -- common/autotest_common.sh@930 -- # kill -0 3461391 00:30:47.866 15:16:06 -- common/autotest_common.sh@931 -- # uname 00:30:47.866 15:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:47.866 15:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3461391 00:30:47.866 15:16:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:47.866 15:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:47.866 15:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3461391' 00:30:47.866 killing process with pid 3461391 00:30:47.866 15:16:06 -- common/autotest_common.sh@945 -- # kill 3461391 00:30:47.866 15:16:06 -- common/autotest_common.sh@950 -- # wait 3461391 00:30:48.126 15:16:06 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:48.126 15:16:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:48.126 15:16:06 -- nvmf/common.sh@116 -- # sync 00:30:48.126 15:16:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:48.126 15:16:06 -- nvmf/common.sh@119 -- # set +e 00:30:48.126 15:16:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:48.126 15:16:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:48.126 rmmod nvme_tcp 00:30:48.126 rmmod nvme_fabrics 00:30:48.126 rmmod nvme_keyring 00:30:48.126 15:16:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:48.126 15:16:06 -- nvmf/common.sh@123 -- # set -e 00:30:48.126 15:16:06 -- nvmf/common.sh@124 -- # return 0 00:30:48.126 15:16:06 -- nvmf/common.sh@477 -- # '[' -n 3461114 ']' 00:30:48.126 15:16:06 -- nvmf/common.sh@478 -- # killprocess 3461114 00:30:48.126 15:16:06 -- common/autotest_common.sh@926 -- # '[' -z 3461114 ']' 00:30:48.126 15:16:06 -- common/autotest_common.sh@930 -- # kill -0 3461114 00:30:48.126 15:16:06 -- common/autotest_common.sh@931 -- # uname 00:30:48.126 15:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:48.126 15:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3461114 00:30:48.126 15:16:06 -- common/autotest_common.sh@932 -- # 
process_name=reactor_1 00:30:48.126 15:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:48.126 15:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3461114' 00:30:48.126 killing process with pid 3461114 00:30:48.126 15:16:06 -- common/autotest_common.sh@945 -- # kill 3461114 00:30:48.126 15:16:06 -- common/autotest_common.sh@950 -- # wait 3461114 00:30:48.385 15:16:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:48.385 15:16:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:48.385 15:16:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:48.385 15:16:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:48.385 15:16:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:48.385 15:16:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.385 15:16:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:48.385 15:16:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.917 15:16:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:50.917 00:30:50.917 real 0m21.775s 00:30:50.917 user 0m24.747s 00:30:50.917 sys 0m6.235s 00:30:50.917 15:16:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:50.917 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:30:50.917 ************************************ 00:30:50.917 END TEST nvmf_discovery_remove_ifc 00:30:50.917 ************************************ 00:30:50.917 15:16:09 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:30:50.917 15:16:09 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:50.917 15:16:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:50.917 15:16:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:50.917 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:30:50.917 ************************************ 00:30:50.917 START TEST nvmf_digest 00:30:50.917 ************************************ 00:30:50.917 15:16:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:50.917 * Looking for test storage... 
00:30:50.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:50.917 15:16:09 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.917 15:16:09 -- nvmf/common.sh@7 -- # uname -s 00:30:50.917 15:16:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.917 15:16:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.917 15:16:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.917 15:16:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.917 15:16:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.917 15:16:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.917 15:16:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.917 15:16:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.917 15:16:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.917 15:16:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.917 15:16:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:50.917 15:16:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:50.917 15:16:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.917 15:16:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.917 15:16:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.917 15:16:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.917 15:16:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.917 15:16:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.917 15:16:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.917 15:16:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.917 15:16:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.917 15:16:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.917 15:16:09 -- paths/export.sh@5 -- # export PATH 00:30:50.917 15:16:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.917 15:16:09 -- nvmf/common.sh@46 -- # : 0 00:30:50.917 15:16:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:50.917 15:16:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:50.917 15:16:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:50.917 15:16:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.917 15:16:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.917 15:16:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:50.917 15:16:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:50.917 15:16:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:50.917 15:16:09 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:50.917 15:16:09 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:50.917 15:16:09 -- host/digest.sh@16 -- # runtime=2 00:30:50.917 15:16:09 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:30:50.917 15:16:09 -- host/digest.sh@132 -- # nvmftestinit 00:30:50.917 15:16:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:50.917 15:16:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.917 15:16:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:50.917 15:16:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:50.917 15:16:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:50.917 15:16:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.917 15:16:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:50.917 15:16:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.917 15:16:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:50.917 15:16:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:50.917 15:16:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:50.917 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:30:57.499 15:16:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:57.499 15:16:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:57.499 15:16:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:57.499 15:16:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:57.499 15:16:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:57.499 15:16:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:57.499 15:16:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:57.499 15:16:15 -- 
nvmf/common.sh@294 -- # net_devs=() 00:30:57.499 15:16:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:57.499 15:16:15 -- nvmf/common.sh@295 -- # e810=() 00:30:57.499 15:16:15 -- nvmf/common.sh@295 -- # local -ga e810 00:30:57.499 15:16:15 -- nvmf/common.sh@296 -- # x722=() 00:30:57.499 15:16:15 -- nvmf/common.sh@296 -- # local -ga x722 00:30:57.499 15:16:15 -- nvmf/common.sh@297 -- # mlx=() 00:30:57.499 15:16:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:57.499 15:16:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.499 15:16:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.500 15:16:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:57.500 15:16:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:57.500 15:16:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:57.500 15:16:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:57.500 15:16:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:57.500 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:57.500 15:16:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:57.500 15:16:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:57.500 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:57.500 15:16:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:57.500 15:16:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:57.500 15:16:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.500 15:16:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:57.500 15:16:15 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.500 15:16:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:57.500 Found net devices under 0000:af:00.0: cvl_0_0 00:30:57.500 15:16:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.500 15:16:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:57.500 15:16:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.500 15:16:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:57.500 15:16:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.500 15:16:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:57.500 Found net devices under 0000:af:00.1: cvl_0_1 00:30:57.500 15:16:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.500 15:16:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:57.500 15:16:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:57.500 15:16:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:57.500 15:16:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.500 15:16:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.500 15:16:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.500 15:16:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:57.500 15:16:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.500 15:16:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.500 15:16:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:57.500 15:16:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.500 15:16:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.500 15:16:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:57.500 15:16:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:57.500 15:16:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.500 15:16:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.500 15:16:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.500 15:16:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.500 15:16:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:57.500 15:16:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.500 15:16:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.500 15:16:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.500 15:16:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:57.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:30:57.500 00:30:57.500 --- 10.0.0.2 ping statistics --- 00:30:57.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.500 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:30:57.500 15:16:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:57.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:30:57.500 00:30:57.500 --- 10.0.0.1 ping statistics --- 00:30:57.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.500 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:30:57.500 15:16:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.500 15:16:15 -- nvmf/common.sh@410 -- # return 0 00:30:57.500 15:16:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:57.500 15:16:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.500 15:16:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:57.500 15:16:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.500 15:16:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:57.500 15:16:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:57.500 15:16:15 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:57.500 15:16:15 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:30:57.500 15:16:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:57.500 15:16:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:57.500 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:30:57.500 ************************************ 00:30:57.500 START TEST nvmf_digest_clean 00:30:57.500 ************************************ 00:30:57.500 15:16:15 -- common/autotest_common.sh@1104 -- # run_digest 00:30:57.500 15:16:15 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:30:57.500 15:16:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:57.500 15:16:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:57.500 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:30:57.500 15:16:15 -- nvmf/common.sh@469 -- # nvmfpid=3467592 00:30:57.500 15:16:15 -- nvmf/common.sh@470 -- # waitforlisten 3467592 00:30:57.500 15:16:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:57.500 15:16:15 -- common/autotest_common.sh@819 -- # '[' -z 3467592 ']' 00:30:57.500 15:16:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.500 15:16:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:57.500 15:16:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.500 15:16:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:57.500 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:30:57.500 [2024-06-11 15:16:15.885277] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
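The nvmf_tcp_init step traced above splits the two E810 ports between network namespaces so one host can act as both target and initiator. A minimal sketch of the equivalent commands, taken from the trace (run as root; cvl_0_0/cvl_0_1 are the interface names found under 0000:af:00.0 and 0000:af:00.1):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open TCP port 4420 on the test link
  ping -c 1 10.0.0.2                                            # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator reachability
  modprobe nvme-tcp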
00:30:57.500 [2024-06-11 15:16:15.885335] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.500 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.500 [2024-06-11 15:16:15.979854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.500 [2024-06-11 15:16:16.068044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:57.500 [2024-06-11 15:16:16.068185] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.500 [2024-06-11 15:16:16.068197] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.500 [2024-06-11 15:16:16.068207] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.500 [2024-06-11 15:16:16.068227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.069 15:16:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:58.069 15:16:16 -- common/autotest_common.sh@852 -- # return 0 00:30:58.069 15:16:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:58.069 15:16:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:58.069 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:30:58.069 15:16:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.069 15:16:16 -- host/digest.sh@120 -- # common_target_config 00:30:58.069 15:16:16 -- host/digest.sh@43 -- # rpc_cmd 00:30:58.069 15:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:58.069 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:30:58.329 null0 00:30:58.329 [2024-06-11 15:16:16.947126] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.329 [2024-06-11 15:16:16.971314] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.329 15:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.329 15:16:16 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:30:58.329 15:16:16 -- host/digest.sh@77 -- # local rw bs qd 00:30:58.329 15:16:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:58.329 15:16:16 -- host/digest.sh@80 -- # rw=randread 00:30:58.329 15:16:16 -- host/digest.sh@80 -- # bs=4096 00:30:58.329 15:16:16 -- host/digest.sh@80 -- # qd=128 00:30:58.329 15:16:16 -- host/digest.sh@82 -- # bperfpid=3467682 00:30:58.329 15:16:16 -- host/digest.sh@83 -- # waitforlisten 3467682 /var/tmp/bperf.sock 00:30:58.329 15:16:16 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:58.329 15:16:16 -- common/autotest_common.sh@819 -- # '[' -z 3467682 ']' 00:30:58.329 15:16:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:58.329 15:16:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:58.329 15:16:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:58.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
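The 'null0' bdev and the 'Listening on 10.0.0.2 port 4420' notice above come out of common_target_config, whose individual rpc_cmd invocations are not echoed in this trace. A rough, assumption-laden reconstruction using standard SPDK RPCs (the null-bdev size/block size and exact options are guesses, not taken from the log) would be:

  # against the target's /var/tmp/spdk.sock
  scripts/rpc.py framework_start_init                   # target was launched with --wait-for-rpc
  scripts/rpc.py nvmf_create_transport -t tcp           # NVMF_TRANSPORT_OPTS resolves to '-t tcp -o' above
  scripts/rpc.py bdev_null_create null0 1000 512        # size/block size assumed
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420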
00:30:58.329 15:16:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:58.329 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:30:58.329 [2024-06-11 15:16:17.023815] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:58.329 [2024-06-11 15:16:17.023870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467682 ] 00:30:58.329 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.329 [2024-06-11 15:16:17.104674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.588 [2024-06-11 15:16:17.192503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.155 15:16:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:59.155 15:16:17 -- common/autotest_common.sh@852 -- # return 0 00:30:59.155 15:16:17 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:59.155 15:16:17 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:59.155 15:16:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:59.414 15:16:18 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.414 15:16:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.673 nvme0n1 00:30:59.932 15:16:18 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:59.932 15:16:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:59.932 Running I/O for 2 seconds... 
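Condensed, the initiator side of each clean-digest run is just the sequence traced above: start bdevperf paused, finish its framework init over the bperf socket, attach the remote controller with data digest enabled, and drive the timed workload (paths relative to the spdk checkout):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The later runs only change -w/-o/-q (randread or randwrite, 4096 or 131072 bytes, queue depth 128 or 16).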
00:31:01.837 00:31:01.837 Latency(us) 00:31:01.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.837 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:01.838 nvme0n1 : 2.00 19266.93 75.26 0.00 0.00 6634.57 3098.07 12868.89 00:31:01.838 =================================================================================================================== 00:31:01.838 Total : 19266.93 75.26 0.00 0.00 6634.57 3098.07 12868.89 00:31:01.838 0 00:31:01.838 15:16:20 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:01.838 15:16:20 -- host/digest.sh@92 -- # get_accel_stats 00:31:01.838 15:16:20 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:01.838 15:16:20 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:01.838 | select(.opcode=="crc32c") 00:31:01.838 | "\(.module_name) \(.executed)"' 00:31:01.838 15:16:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:02.097 15:16:20 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:02.097 15:16:20 -- host/digest.sh@93 -- # exp_module=software 00:31:02.097 15:16:20 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:02.097 15:16:20 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:02.097 15:16:20 -- host/digest.sh@97 -- # killprocess 3467682 00:31:02.097 15:16:20 -- common/autotest_common.sh@926 -- # '[' -z 3467682 ']' 00:31:02.097 15:16:20 -- common/autotest_common.sh@930 -- # kill -0 3467682 00:31:02.097 15:16:20 -- common/autotest_common.sh@931 -- # uname 00:31:02.097 15:16:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:02.097 15:16:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3467682 00:31:02.356 15:16:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:02.356 15:16:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:02.356 15:16:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3467682' 00:31:02.356 killing process with pid 3467682 00:31:02.356 15:16:20 -- common/autotest_common.sh@945 -- # kill 3467682 00:31:02.356 Received shutdown signal, test time was about 2.000000 seconds 00:31:02.356 00:31:02.357 Latency(us) 00:31:02.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.357 =================================================================================================================== 00:31:02.357 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.357 15:16:20 -- common/autotest_common.sh@950 -- # wait 3467682 00:31:02.357 15:16:21 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:31:02.357 15:16:21 -- host/digest.sh@77 -- # local rw bs qd 00:31:02.357 15:16:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:02.357 15:16:21 -- host/digest.sh@80 -- # rw=randread 00:31:02.357 15:16:21 -- host/digest.sh@80 -- # bs=131072 00:31:02.357 15:16:21 -- host/digest.sh@80 -- # qd=16 00:31:02.357 15:16:21 -- host/digest.sh@82 -- # bperfpid=3468443 00:31:02.357 15:16:21 -- host/digest.sh@83 -- # waitforlisten 3468443 /var/tmp/bperf.sock 00:31:02.357 15:16:21 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:02.357 15:16:21 -- common/autotest_common.sh@819 -- # '[' -z 3468443 ']' 00:31:02.357 15:16:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
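The pass/fail decision above relies on the accel framework's own counters: with no hardware accel module enabled in this run, every crc32c produced by the digest path is expected to land in the software module. The query is the one traced, roughly:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # expected: 'software <non-zero count>'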
00:31:02.357 15:16:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:02.357 15:16:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:02.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:02.357 15:16:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:02.357 15:16:21 -- common/autotest_common.sh@10 -- # set +x 00:31:02.615 [2024-06-11 15:16:21.232590] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:02.615 [2024-06-11 15:16:21.232651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468443 ] 00:31:02.615 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:02.615 Zero copy mechanism will not be used. 00:31:02.615 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.615 [2024-06-11 15:16:21.313821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.615 [2024-06-11 15:16:21.393649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.615 15:16:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:02.615 15:16:21 -- common/autotest_common.sh@852 -- # return 0 00:31:02.615 15:16:21 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:02.615 15:16:21 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:02.615 15:16:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:03.183 15:16:21 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.183 15:16:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.442 nvme0n1 00:31:03.442 15:16:22 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:03.442 15:16:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:03.700 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:03.700 Zero copy mechanism will not be used. 00:31:03.700 Running I/O for 2 seconds... 
00:31:05.603 00:31:05.603 Latency(us) 00:31:05.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.603 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:05.603 nvme0n1 : 2.00 2800.25 350.03 0.00 0.00 5710.23 2249.08 18469.24 00:31:05.603 =================================================================================================================== 00:31:05.603 Total : 2800.25 350.03 0.00 0.00 5710.23 2249.08 18469.24 00:31:05.603 0 00:31:05.603 15:16:24 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:05.603 15:16:24 -- host/digest.sh@92 -- # get_accel_stats 00:31:05.603 15:16:24 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:05.603 15:16:24 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:05.603 | select(.opcode=="crc32c") 00:31:05.603 | "\(.module_name) \(.executed)"' 00:31:05.603 15:16:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:05.862 15:16:24 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:05.862 15:16:24 -- host/digest.sh@93 -- # exp_module=software 00:31:05.862 15:16:24 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:05.862 15:16:24 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:05.862 15:16:24 -- host/digest.sh@97 -- # killprocess 3468443 00:31:05.862 15:16:24 -- common/autotest_common.sh@926 -- # '[' -z 3468443 ']' 00:31:05.862 15:16:24 -- common/autotest_common.sh@930 -- # kill -0 3468443 00:31:05.862 15:16:24 -- common/autotest_common.sh@931 -- # uname 00:31:05.862 15:16:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:05.862 15:16:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3468443 00:31:05.862 15:16:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:05.862 15:16:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:05.862 15:16:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3468443' 00:31:05.862 killing process with pid 3468443 00:31:05.862 15:16:24 -- common/autotest_common.sh@945 -- # kill 3468443 00:31:05.862 Received shutdown signal, test time was about 2.000000 seconds 00:31:05.862 00:31:05.862 Latency(us) 00:31:05.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.862 =================================================================================================================== 00:31:05.862 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.862 15:16:24 -- common/autotest_common.sh@950 -- # wait 3468443 00:31:06.122 15:16:24 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:31:06.122 15:16:24 -- host/digest.sh@77 -- # local rw bs qd 00:31:06.122 15:16:24 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:06.122 15:16:24 -- host/digest.sh@80 -- # rw=randwrite 00:31:06.122 15:16:24 -- host/digest.sh@80 -- # bs=4096 00:31:06.122 15:16:24 -- host/digest.sh@80 -- # qd=128 00:31:06.122 15:16:24 -- host/digest.sh@82 -- # bperfpid=3469143 00:31:06.122 15:16:24 -- host/digest.sh@83 -- # waitforlisten 3469143 /var/tmp/bperf.sock 00:31:06.122 15:16:24 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:06.122 15:16:24 -- common/autotest_common.sh@819 -- # '[' -z 3469143 ']' 00:31:06.122 15:16:24 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:31:06.122 15:16:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:06.122 15:16:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:06.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:06.122 15:16:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:06.122 15:16:24 -- common/autotest_common.sh@10 -- # set +x 00:31:06.122 [2024-06-11 15:16:24.796520] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:06.122 [2024-06-11 15:16:24.796580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469143 ] 00:31:06.122 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.122 [2024-06-11 15:16:24.876666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.381 [2024-06-11 15:16:24.964207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.381 15:16:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:06.381 15:16:24 -- common/autotest_common.sh@852 -- # return 0 00:31:06.381 15:16:24 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:06.381 15:16:24 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:06.381 15:16:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:06.640 15:16:25 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.640 15:16:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.899 nvme0n1 00:31:06.899 15:16:25 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:06.899 15:16:25 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:07.158 Running I/O for 2 seconds... 
00:31:09.060 00:31:09.060 Latency(us) 00:31:09.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.060 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.060 nvme0n1 : 2.01 18204.19 71.11 0.00 0.00 7015.99 6106.76 17873.45 00:31:09.060 =================================================================================================================== 00:31:09.060 Total : 18204.19 71.11 0.00 0.00 7015.99 6106.76 17873.45 00:31:09.060 0 00:31:09.060 15:16:27 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:09.060 15:16:27 -- host/digest.sh@92 -- # get_accel_stats 00:31:09.060 15:16:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:09.060 15:16:27 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:09.060 | select(.opcode=="crc32c") 00:31:09.060 | "\(.module_name) \(.executed)"' 00:31:09.060 15:16:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:09.318 15:16:28 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:09.319 15:16:28 -- host/digest.sh@93 -- # exp_module=software 00:31:09.319 15:16:28 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:09.319 15:16:28 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:09.319 15:16:28 -- host/digest.sh@97 -- # killprocess 3469143 00:31:09.319 15:16:28 -- common/autotest_common.sh@926 -- # '[' -z 3469143 ']' 00:31:09.319 15:16:28 -- common/autotest_common.sh@930 -- # kill -0 3469143 00:31:09.319 15:16:28 -- common/autotest_common.sh@931 -- # uname 00:31:09.319 15:16:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:09.319 15:16:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3469143 00:31:09.319 15:16:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:09.319 15:16:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:09.319 15:16:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3469143' 00:31:09.319 killing process with pid 3469143 00:31:09.319 15:16:28 -- common/autotest_common.sh@945 -- # kill 3469143 00:31:09.319 Received shutdown signal, test time was about 2.000000 seconds 00:31:09.319 00:31:09.319 Latency(us) 00:31:09.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.319 =================================================================================================================== 00:31:09.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.319 15:16:28 -- common/autotest_common.sh@950 -- # wait 3469143 00:31:09.577 15:16:28 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:31:09.577 15:16:28 -- host/digest.sh@77 -- # local rw bs qd 00:31:09.577 15:16:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:09.577 15:16:28 -- host/digest.sh@80 -- # rw=randwrite 00:31:09.577 15:16:28 -- host/digest.sh@80 -- # bs=131072 00:31:09.577 15:16:28 -- host/digest.sh@80 -- # qd=16 00:31:09.577 15:16:28 -- host/digest.sh@82 -- # bperfpid=3469788 00:31:09.577 15:16:28 -- host/digest.sh@83 -- # waitforlisten 3469788 /var/tmp/bperf.sock 00:31:09.577 15:16:28 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:09.577 15:16:28 -- common/autotest_common.sh@819 -- # '[' -z 3469788 ']' 00:31:09.577 15:16:28 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:31:09.577 15:16:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:09.577 15:16:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:09.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:09.577 15:16:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:09.577 15:16:28 -- common/autotest_common.sh@10 -- # set +x 00:31:09.577 [2024-06-11 15:16:28.332531] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:09.577 [2024-06-11 15:16:28.332591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469788 ] 00:31:09.577 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:09.577 Zero copy mechanism will not be used. 00:31:09.577 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.577 [2024-06-11 15:16:28.412084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.874 [2024-06-11 15:16:28.499472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.874 15:16:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:09.874 15:16:28 -- common/autotest_common.sh@852 -- # return 0 00:31:09.874 15:16:28 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:09.874 15:16:28 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:09.874 15:16:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:10.162 15:16:28 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:10.162 15:16:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:10.420 nvme0n1 00:31:10.420 15:16:29 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:10.420 15:16:29 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:10.679 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:10.679 Zero copy mechanism will not be used. 00:31:10.679 Running I/O for 2 seconds... 
00:31:12.581 00:31:12.581 Latency(us) 00:31:12.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.581 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:12.581 nvme0n1 : 2.00 3178.96 397.37 0.00 0.00 5023.36 3753.43 14179.61 00:31:12.581 =================================================================================================================== 00:31:12.581 Total : 3178.96 397.37 0.00 0.00 5023.36 3753.43 14179.61 00:31:12.581 0 00:31:12.581 15:16:31 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:12.581 15:16:31 -- host/digest.sh@92 -- # get_accel_stats 00:31:12.581 15:16:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:12.581 15:16:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:12.581 15:16:31 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:12.581 | select(.opcode=="crc32c") 00:31:12.581 | "\(.module_name) \(.executed)"' 00:31:12.840 15:16:31 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:12.840 15:16:31 -- host/digest.sh@93 -- # exp_module=software 00:31:12.840 15:16:31 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:12.840 15:16:31 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:12.840 15:16:31 -- host/digest.sh@97 -- # killprocess 3469788 00:31:12.840 15:16:31 -- common/autotest_common.sh@926 -- # '[' -z 3469788 ']' 00:31:12.840 15:16:31 -- common/autotest_common.sh@930 -- # kill -0 3469788 00:31:12.840 15:16:31 -- common/autotest_common.sh@931 -- # uname 00:31:12.840 15:16:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:12.840 15:16:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3469788 00:31:12.840 15:16:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:12.840 15:16:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:12.840 15:16:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3469788' 00:31:12.840 killing process with pid 3469788 00:31:12.840 15:16:31 -- common/autotest_common.sh@945 -- # kill 3469788 00:31:12.840 Received shutdown signal, test time was about 2.000000 seconds 00:31:12.840 00:31:12.840 Latency(us) 00:31:12.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.840 =================================================================================================================== 00:31:12.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.840 15:16:31 -- common/autotest_common.sh@950 -- # wait 3469788 00:31:13.099 15:16:31 -- host/digest.sh@126 -- # killprocess 3467592 00:31:13.099 15:16:31 -- common/autotest_common.sh@926 -- # '[' -z 3467592 ']' 00:31:13.099 15:16:31 -- common/autotest_common.sh@930 -- # kill -0 3467592 00:31:13.099 15:16:31 -- common/autotest_common.sh@931 -- # uname 00:31:13.099 15:16:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:13.099 15:16:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3467592 00:31:13.099 15:16:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:13.099 15:16:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:13.100 15:16:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3467592' 00:31:13.100 killing process with pid 3467592 00:31:13.100 15:16:31 -- common/autotest_common.sh@945 -- # kill 3467592 00:31:13.100 15:16:31 -- common/autotest_common.sh@950 -- # wait 3467592 
00:31:13.357 00:31:13.357 real 0m16.255s 00:31:13.357 user 0m31.760s 00:31:13.357 sys 0m4.054s 00:31:13.357 15:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.357 15:16:32 -- common/autotest_common.sh@10 -- # set +x 00:31:13.357 ************************************ 00:31:13.357 END TEST nvmf_digest_clean 00:31:13.357 ************************************ 00:31:13.357 15:16:32 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:31:13.357 15:16:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:13.358 15:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:13.358 15:16:32 -- common/autotest_common.sh@10 -- # set +x 00:31:13.358 ************************************ 00:31:13.358 START TEST nvmf_digest_error 00:31:13.358 ************************************ 00:31:13.358 15:16:32 -- common/autotest_common.sh@1104 -- # run_digest_error 00:31:13.358 15:16:32 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:31:13.358 15:16:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:13.358 15:16:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:13.358 15:16:32 -- common/autotest_common.sh@10 -- # set +x 00:31:13.358 15:16:32 -- nvmf/common.sh@469 -- # nvmfpid=3470404 00:31:13.358 15:16:32 -- nvmf/common.sh@470 -- # waitforlisten 3470404 00:31:13.358 15:16:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:13.358 15:16:32 -- common/autotest_common.sh@819 -- # '[' -z 3470404 ']' 00:31:13.358 15:16:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.358 15:16:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:13.358 15:16:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.358 15:16:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:13.358 15:16:32 -- common/autotest_common.sh@10 -- # set +x 00:31:13.358 [2024-06-11 15:16:32.182928] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:13.358 [2024-06-11 15:16:32.182985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.615 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.615 [2024-06-11 15:16:32.278120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.615 [2024-06-11 15:16:32.368142] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:13.615 [2024-06-11 15:16:32.368275] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.615 [2024-06-11 15:16:32.368286] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.615 [2024-06-11 15:16:32.368297] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:13.615 [2024-06-11 15:16:32.368317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.551 15:16:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:14.551 15:16:33 -- common/autotest_common.sh@852 -- # return 0 00:31:14.551 15:16:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:14.551 15:16:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:14.551 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:31:14.551 15:16:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.551 15:16:33 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:14.551 15:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.551 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:31:14.551 [2024-06-11 15:16:33.074448] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:14.551 15:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.551 15:16:33 -- host/digest.sh@104 -- # common_target_config 00:31:14.551 15:16:33 -- host/digest.sh@43 -- # rpc_cmd 00:31:14.551 15:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.551 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:31:14.551 null0 00:31:14.551 [2024-06-11 15:16:33.172989] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.551 [2024-06-11 15:16:33.197183] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.551 15:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.551 15:16:33 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:31:14.551 15:16:33 -- host/digest.sh@54 -- # local rw bs qd 00:31:14.551 15:16:33 -- host/digest.sh@56 -- # rw=randread 00:31:14.551 15:16:33 -- host/digest.sh@56 -- # bs=4096 00:31:14.551 15:16:33 -- host/digest.sh@56 -- # qd=128 00:31:14.551 15:16:33 -- host/digest.sh@58 -- # bperfpid=3470641 00:31:14.551 15:16:33 -- host/digest.sh@60 -- # waitforlisten 3470641 /var/tmp/bperf.sock 00:31:14.551 15:16:33 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:14.551 15:16:33 -- common/autotest_common.sh@819 -- # '[' -z 3470641 ']' 00:31:14.551 15:16:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:14.551 15:16:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:14.551 15:16:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:14.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:14.551 15:16:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:14.551 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:31:14.551 [2024-06-11 15:16:33.246242] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:14.551 [2024-06-11 15:16:33.246296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470641 ] 00:31:14.551 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.551 [2024-06-11 15:16:33.326392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.810 [2024-06-11 15:16:33.413961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.376 15:16:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:15.376 15:16:34 -- common/autotest_common.sh@852 -- # return 0 00:31:15.376 15:16:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:15.376 15:16:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:15.634 15:16:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:15.634 15:16:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.634 15:16:34 -- common/autotest_common.sh@10 -- # set +x 00:31:15.634 15:16:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.635 15:16:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.635 15:16:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.893 nvme0n1 00:31:15.893 15:16:34 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:15.893 15:16:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.893 15:16:34 -- common/autotest_common.sh@10 -- # set +x 00:31:16.152 15:16:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.152 15:16:34 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:16.152 15:16:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:16.152 Running I/O for 2 seconds... 
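For the error-path test the digest machinery is deliberately sabotaged: the target's crc32c opcode is assigned to the error accel module and then set to corrupt results, while bdevperf is configured to count NVMe errors and retry failed I/O. Stripped of the test wrappers, the traced sequence is approximately:

  # target side (nvmf_tgt started with --wait-for-rpc, transport/subsystem configured as before)
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable           # start with injection off
  # initiator side
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm the corruption, then run the workload (flags copied from the trace)
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The 'data digest error on tqpair' / 'COMMAND TRANSIENT TRANSPORT ERROR' lines that follow are the initiator detecting the corrupted digests and retrying, which is the behaviour this test exercises.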
00:31:16.153 [2024-06-11 15:16:34.869884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.869925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.869940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.153 [2024-06-11 15:16:34.886315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.886348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.886362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.153 [2024-06-11 15:16:34.898775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.898803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.898817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.153 [2024-06-11 15:16:34.912886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.912914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.912927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.153 [2024-06-11 15:16:34.925850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.925878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.925890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.153 [2024-06-11 15:16:34.938832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.938859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.938872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.153 [2024-06-11 15:16:34.951910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.951936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.951949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.153 [2024-06-11 15:16:34.965682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.965709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.965721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.153 [2024-06-11 15:16:34.978909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.978936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.978953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.153 [2024-06-11 15:16:34.991789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.153 [2024-06-11 15:16:34.991816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.153 [2024-06-11 15:16:34.991828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.005783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.005810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.005822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.018825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.018852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.018864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.031726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.031754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.031766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.044654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.044682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.044694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.058546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.058574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.058586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.071730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.071759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.071772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.084734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.084761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.084773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.099116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.099142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.099155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.111918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.111945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.111957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.124886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.124913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.124925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.138176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.138204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:16.412 [2024-06-11 15:16:35.138216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.151929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.151956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.151968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.165014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.165049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.165062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.178185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.178212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.178224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.191376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.191402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.191414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.205403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.205430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.205447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.218130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.218158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.218170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.231160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.231188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:5238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.231200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.412 [2024-06-11 15:16:35.244215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.412 [2024-06-11 15:16:35.244243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.412 [2024-06-11 15:16:35.244255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.258200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.258228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.258239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.271196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.271223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.271235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.284577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.284603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.284616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.298091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.298117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.298129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.311156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.311183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.311195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.324099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.324129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.324141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.338041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.338069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.338081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.351235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.351262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.351274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.364102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.364129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.364142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.377329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.377357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.377371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.390851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.390878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.390890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.403996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.404030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.404043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.417303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 
00:31:16.672 [2024-06-11 15:16:35.417332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.417344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.431044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.431072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.431085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.442588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.442615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.442628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.457236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.457264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.457276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.470439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.470468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.470480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.483602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.483628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.672 [2024-06-11 15:16:35.483640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.672 [2024-06-11 15:16:35.497134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.672 [2024-06-11 15:16:35.497160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.673 [2024-06-11 15:16:35.497172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.673 [2024-06-11 15:16:35.510182] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.673 [2024-06-11 15:16:35.510214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.673 [2024-06-11 15:16:35.510227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.523013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.523048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.523060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.537097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.537124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.537136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.550127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.550153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.550169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.562937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.562964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.562976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.576052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.576078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.576090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.589775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.589801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.589813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:16.933 [2024-06-11 15:16:35.602572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.602598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.602610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.615723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.615749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.615761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.629666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.629693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.629705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.642505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.642530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.642542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.655606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.655633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.655645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.669171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.669197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.669209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.682148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.682173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.682185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.695277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.695303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.695314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.709271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.709297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.709308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.722081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.722107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.722119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.734981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.735007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.735019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.748744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.748770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.748782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.933 [2024-06-11 15:16:35.761756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:16.933 [2024-06-11 15:16:35.761782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.933 [2024-06-11 15:16:35.761794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.774842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.774868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.774884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.788555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.788581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.788592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.801688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.801714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.801725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.814604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.814630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.814642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.828379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.828405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.828416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.841542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.841569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.841580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.854561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.854588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.854599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.868035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.868061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.193 [2024-06-11 15:16:35.868073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.881531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.881557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.881570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.894581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.894612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.894625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.907669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.907695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.907707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.921434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.921460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.921471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.934441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.934467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.934479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.947572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.947598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.947611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.961126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.961152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7452 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.193 [2024-06-11 15:16:35.961164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.193 [2024-06-11 15:16:35.974228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.193 [2024-06-11 15:16:35.974255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.194 [2024-06-11 15:16:35.974268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.194 [2024-06-11 15:16:35.987073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.194 [2024-06-11 15:16:35.987099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.194 [2024-06-11 15:16:35.987112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.194 [2024-06-11 15:16:36.001057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.194 [2024-06-11 15:16:36.001084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.194 [2024-06-11 15:16:36.001096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.194 [2024-06-11 15:16:36.014051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.194 [2024-06-11 15:16:36.014077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.194 [2024-06-11 15:16:36.014089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.194 [2024-06-11 15:16:36.027206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.194 [2024-06-11 15:16:36.027233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.194 [2024-06-11 15:16:36.027245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.453 [2024-06-11 15:16:36.039909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.453 [2024-06-11 15:16:36.039935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.453 [2024-06-11 15:16:36.039947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.453 [2024-06-11 15:16:36.053788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.453 [2024-06-11 15:16:36.053815] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.453 [2024-06-11 15:16:36.053828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.453 [2024-06-11 15:16:36.066733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.453 [2024-06-11 15:16:36.066759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.453 [2024-06-11 15:16:36.066770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.453 [2024-06-11 15:16:36.079771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.453 [2024-06-11 15:16:36.079797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.453 [2024-06-11 15:16:36.079809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.453 [2024-06-11 15:16:36.093562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.453 [2024-06-11 15:16:36.093590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.453 [2024-06-11 15:16:36.093601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.106565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.106592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.106604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.119544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.119570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.119586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.133432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.133458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.133470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.146706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 
15:16:36.146732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.146744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.159556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.159582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.159594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.172613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.172639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.172651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.186401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.186426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.186438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.199559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.199586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.199598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.213385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.213411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.213424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.226378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.226406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.226417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.239532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.239563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.239575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.252361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.252388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.252400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.266120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.266145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.266157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.279291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.279317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.279329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.454 [2024-06-11 15:16:36.292160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.454 [2024-06-11 15:16:36.292185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.454 [2024-06-11 15:16:36.292198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.306172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.306199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.306211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.319318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.319344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.319356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.332001] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.332034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.332048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.345019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.345052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.345068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.359088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.359114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.359126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.371913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.371939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.371951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.384659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.384685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.384697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.397828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.397854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.397865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.411758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.411785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.411797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:17.715 [2024-06-11 15:16:36.424564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.424591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.424602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.438778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.438806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.438818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.451730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.451755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.451767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.464650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.464681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.464693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.477430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.477457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.477469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.491513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.491540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.491552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.504345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.504371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.504383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.517482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.517509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.517521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.531157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.715 [2024-06-11 15:16:36.531185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.715 [2024-06-11 15:16:36.531197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.715 [2024-06-11 15:16:36.543974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.716 [2024-06-11 15:16:36.544001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.716 [2024-06-11 15:16:36.544014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.975 [2024-06-11 15:16:36.557103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.975 [2024-06-11 15:16:36.557130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.975 [2024-06-11 15:16:36.557142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.975 [2024-06-11 15:16:36.570873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.975 [2024-06-11 15:16:36.570900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.975 [2024-06-11 15:16:36.570913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.975 [2024-06-11 15:16:36.583937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.975 [2024-06-11 15:16:36.583963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.975 [2024-06-11 15:16:36.583974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.975 [2024-06-11 15:16:36.597049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.975 [2024-06-11 15:16:36.597078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.975 [2024-06-11 15:16:36.597091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.975 [2024-06-11 15:16:36.610652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.975 [2024-06-11 15:16:36.610680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.975 [2024-06-11 15:16:36.610692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.975 [2024-06-11 15:16:36.623836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.975 [2024-06-11 15:16:36.623864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.623876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.636804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.636830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.636842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.650637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.650663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.650675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.663474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.663501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.663513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.676321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.676347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.676359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.690524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.690552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.976 [2024-06-11 15:16:36.690569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.703512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.703539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.703551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.716619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.716645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.716657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.730314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.730342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.730354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.743145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.743172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.743184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.756299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.756326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.756338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.769743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.769770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.976 [2024-06-11 15:16:36.769783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.976 [2024-06-11 15:16:36.782926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00) 00:31:17.976 [2024-06-11 15:16:36.782952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:5679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.976 [2024-06-11 15:16:36.782964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:17.976 [2024-06-11 15:16:36.795811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00)
00:31:17.976 [2024-06-11 15:16:36.795839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.976 [2024-06-11 15:16:36.795851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:17.976 [2024-06-11 15:16:36.809814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00)
00:31:17.976 [2024-06-11 15:16:36.809846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.976 [2024-06-11 15:16:36.809858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:18.236 [2024-06-11 15:16:36.822657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00)
00:31:18.236 [2024-06-11 15:16:36.822684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.236 [2024-06-11 15:16:36.822696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:18.236 [2024-06-11 15:16:36.835478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00)
00:31:18.236 [2024-06-11 15:16:36.835505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.236 [2024-06-11 15:16:36.835517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:18.236 [2024-06-11 15:16:36.848627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218ef00)
00:31:18.236 [2024-06-11 15:16:36.848654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.236 [2024-06-11 15:16:36.848666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:18.236
00:31:18.236 Latency(us)
00:31:18.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:18.236 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:18.236 nvme0n1 : 2.00 19189.90 74.96 0.00 0.00 6661.43 2993.80 18111.77
00:31:18.236 ===================================================================================================================
00:31:18.236 Total : 19189.90 74.96 0.00 0.00 6661.43 2993.80 18111.77
00:31:18.236 0
00:31:18.236 15:16:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:18.236 15:16:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:18.236 15:16:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:18.236 | .driver_specific 00:31:18.236 | .nvme_error 00:31:18.236 | .status_code 00:31:18.236 | .command_transient_transport_error' 00:31:18.236 15:16:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:18.495 15:16:37 -- host/digest.sh@71 -- # (( 150 > 0 )) 00:31:18.495 15:16:37 -- host/digest.sh@73 -- # killprocess 3470641 00:31:18.496 15:16:37 -- common/autotest_common.sh@926 -- # '[' -z 3470641 ']' 00:31:18.496 15:16:37 -- common/autotest_common.sh@930 -- # kill -0 3470641 00:31:18.496 15:16:37 -- common/autotest_common.sh@931 -- # uname 00:31:18.496 15:16:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:18.496 15:16:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3470641 00:31:18.496 15:16:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:18.496 15:16:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:18.496 15:16:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3470641' 00:31:18.496 killing process with pid 3470641 00:31:18.496 15:16:37 -- common/autotest_common.sh@945 -- # kill 3470641 00:31:18.496 Received shutdown signal, test time was about 2.000000 seconds 00:31:18.496 00:31:18.496 Latency(us) 00:31:18.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.496 =================================================================================================================== 00:31:18.496 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:18.496 15:16:37 -- common/autotest_common.sh@950 -- # wait 3470641 00:31:18.755 15:16:37 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:31:18.755 15:16:37 -- host/digest.sh@54 -- # local rw bs qd 00:31:18.755 15:16:37 -- host/digest.sh@56 -- # rw=randread 00:31:18.755 15:16:37 -- host/digest.sh@56 -- # bs=131072 00:31:18.755 15:16:37 -- host/digest.sh@56 -- # qd=16 00:31:18.755 15:16:37 -- host/digest.sh@58 -- # bperfpid=3471443 00:31:18.755 15:16:37 -- host/digest.sh@60 -- # waitforlisten 3471443 /var/tmp/bperf.sock 00:31:18.755 15:16:37 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:31:18.755 15:16:37 -- common/autotest_common.sh@819 -- # '[' -z 3471443 ']' 00:31:18.755 15:16:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:18.755 15:16:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:18.755 15:16:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:18.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:18.755 15:16:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:18.755 15:16:37 -- common/autotest_common.sh@10 -- # set +x 00:31:18.755 [2024-06-11 15:16:37.437396] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:18.755 [2024-06-11 15:16:37.437455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471443 ] 00:31:18.755 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:18.755 Zero copy mechanism will not be used. 
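The transient-error check traced above reduces to the standalone sketch below; the rpc.py path, the /var/tmp/bperf.sock socket and the jq filter are the ones used in this run, and the wrapper mirrors digest.sh's get_transient_errcount helper (the threshold check is the same "count > 0" assertion seen in the trace):

#!/usr/bin/env bash
# Sketch: ask bdevperf (over its /var/tmp/bperf.sock RPC socket) for per-bdev
# iostat and pull the NVMe status-code counter that tracks COMMAND TRANSIENT
# TRANSPORT ERROR completions, then require it to be non-zero after the
# corrupted-digest run, as host/digest.sh does.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

get_transient_errcount() {
        "$rpc" -s "$sock" bdev_get_iostat -b "$1" \
                | jq -r '.bdevs[0]
                         | .driver_specific
                         | .nvme_error
                         | .status_code
                         | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 )) || { echo "expected transient transport errors, got $errcount" >&2; exit 1; }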
00:31:18.755 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.755 [2024-06-11 15:16:37.521419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.015 [2024-06-11 15:16:37.602022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.583 15:16:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:19.583 15:16:38 -- common/autotest_common.sh@852 -- # return 0 00:31:19.583 15:16:38 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:19.583 15:16:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:19.842 15:16:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:19.842 15:16:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.842 15:16:38 -- common/autotest_common.sh@10 -- # set +x 00:31:19.842 15:16:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:19.842 15:16:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:19.842 15:16:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:20.101 nvme0n1 00:31:20.101 15:16:38 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:20.101 15:16:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.101 15:16:38 -- common/autotest_common.sh@10 -- # set +x 00:31:20.101 15:16:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.101 15:16:38 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:20.101 15:16:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:20.359 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:20.359 Zero copy mechanism will not be used. 00:31:20.359 Running I/O for 2 seconds... 
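Condensed, the RPC sequence traced above for this 131072-byte randread pass is sketched below; it assumes bdevperf is already listening on /var/tmp/bperf.sock and that rpc_cmd in the trace resolves to the target application's default RPC socket (all commands and arguments are exactly those shown in the xtrace output):

#!/usr/bin/env bash
# Replay of the setup steps traced above: keep NVMe error counters and disable
# retries, attach the TCP controller with data digest enabled (--ddgst), arm
# CRC-32C error injection, then start the queued bdevperf job.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable      # clear any previous injection (default RPC socket, assumed)
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32  # same injection flags as in the trace
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests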
00:31:20.359 [2024-06-11 15:16:39.030518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.030561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.030576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.044426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.044456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.044469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.057716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.057746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.057759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.069170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.069200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.069212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.081924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.081954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.081967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.093727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.093755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.093767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.105126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.105154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.105167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.116790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.116818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.116830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.127991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.128018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.128038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.140003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.140037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.140055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.153609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.153637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.153650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.165756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.165784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.165797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.178223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.178250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.178263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.359 [2024-06-11 15:16:39.189544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.359 [2024-06-11 15:16:39.189571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.359 [2024-06-11 15:16:39.189583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.200949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.200976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.200988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.212704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.212732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.212744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.224040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.224067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.224079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.235180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.235206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.235219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.246897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.246925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.246937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.258135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.258161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.258173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.269471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.269499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:20.619 [2024-06-11 15:16:39.269512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.280253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.280279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.280291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.291791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.291819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.291831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.303669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.303697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.303709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.315299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.315326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.315339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.326909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.326936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.326948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.338611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.338639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.338656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.350064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.350090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.350102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.361322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.361349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.619 [2024-06-11 15:16:39.361361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.619 [2024-06-11 15:16:39.372466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.619 [2024-06-11 15:16:39.372493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.620 [2024-06-11 15:16:39.372506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.620 [2024-06-11 15:16:39.383251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.620 [2024-06-11 15:16:39.383277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.620 [2024-06-11 15:16:39.383289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.620 [2024-06-11 15:16:39.394519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.620 [2024-06-11 15:16:39.394547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.620 [2024-06-11 15:16:39.394560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.620 [2024-06-11 15:16:39.405302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.620 [2024-06-11 15:16:39.405330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.620 [2024-06-11 15:16:39.405342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.620 [2024-06-11 15:16:39.416387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.620 [2024-06-11 15:16:39.416414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.620 [2024-06-11 15:16:39.416427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.620 [2024-06-11 15:16:39.427710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.620 [2024-06-11 15:16:39.427738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.620 [2024-06-11 15:16:39.427751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.620 [2024-06-11 15:16:39.438157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.620 [2024-06-11 15:16:39.438188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.620 [2024-06-11 15:16:39.438200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.620 [2024-06-11 15:16:39.449197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.620 [2024-06-11 15:16:39.449225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.620 [2024-06-11 15:16:39.449238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.460707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.879 [2024-06-11 15:16:39.460734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.879 [2024-06-11 15:16:39.460747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.473443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.879 [2024-06-11 15:16:39.473471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.879 [2024-06-11 15:16:39.473483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.484621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.879 [2024-06-11 15:16:39.484649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.879 [2024-06-11 15:16:39.484662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.496081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.879 [2024-06-11 15:16:39.496109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.879 [2024-06-11 15:16:39.496122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.507268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 
00:31:20.879 [2024-06-11 15:16:39.507294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.879 [2024-06-11 15:16:39.507307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.518744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.879 [2024-06-11 15:16:39.518772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.879 [2024-06-11 15:16:39.518784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.529105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.879 [2024-06-11 15:16:39.529131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.879 [2024-06-11 15:16:39.529143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.540581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.879 [2024-06-11 15:16:39.540609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.879 [2024-06-11 15:16:39.540621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.551951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.879 [2024-06-11 15:16:39.551978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.879 [2024-06-11 15:16:39.551991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.879 [2024-06-11 15:16:39.562805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.879 [2024-06-11 15:16:39.562832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.562844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.574092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.574119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.574132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.584932] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.584959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.584971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.596981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.597009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.597021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.608062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.608090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.608102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.618810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.618837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.618850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.629711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.629747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.629760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.641270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.641299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.641312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.652546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.652575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.652587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.663980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.664008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.664021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.675588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.675616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.675628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.686231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.686259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.686271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.697014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.697050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.697063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.708244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.708271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.708284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.880 [2024-06-11 15:16:39.719702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:20.880 [2024-06-11 15:16:39.719729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.880 [2024-06-11 15:16:39.719742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.730577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.730604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.730616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.741346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.741373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.741386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.753075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.753102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.753115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.764433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.764460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.764473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.776108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.776137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.776151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.787813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.787841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.787854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.799182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.799209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.799222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.810569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.810597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.810610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.821939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.821968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.821985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.834420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.834448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.834460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.845810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.845839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.845851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.856692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.856719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.856731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.867996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.868030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.868044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.879588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.879617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.879630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.890621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.890649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:21.140 [2024-06-11 15:16:39.890662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.901346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.901371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.901384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.912131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.912157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.912169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.923609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.923641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.140 [2024-06-11 15:16:39.923654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.140 [2024-06-11 15:16:39.935146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.140 [2024-06-11 15:16:39.935173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.141 [2024-06-11 15:16:39.935185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.141 [2024-06-11 15:16:39.946487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.141 [2024-06-11 15:16:39.946515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.141 [2024-06-11 15:16:39.946527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.141 [2024-06-11 15:16:39.958582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.141 [2024-06-11 15:16:39.958611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.141 [2024-06-11 15:16:39.958623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.141 [2024-06-11 15:16:39.970847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.141 [2024-06-11 15:16:39.970874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.141 [2024-06-11 15:16:39.970887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:39.981157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:39.981184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:39.981197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:39.992514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:39.992543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:39.992555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.005196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.005229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.005242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.018673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.018705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.018719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.030141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.030170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.030182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.041300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.041328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.041341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.052832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.052860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.052872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.065505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.065534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.065546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.079161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.079191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.079204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.090340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.090368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.090380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.101404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.101432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.101444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.113157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.113184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.113196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.125017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.125056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.125068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.136779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 
00:31:21.400 [2024-06-11 15:16:40.136806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.136818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.148687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.148715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.148726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.163883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.163909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.163920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.176746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.176773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.176784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.189638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.189665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.400 [2024-06-11 15:16:40.189676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.400 [2024-06-11 15:16:40.205927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.400 [2024-06-11 15:16:40.205954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.401 [2024-06-11 15:16:40.205966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.401 [2024-06-11 15:16:40.219601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.401 [2024-06-11 15:16:40.219627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.401 [2024-06-11 15:16:40.219639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.401 [2024-06-11 15:16:40.234536] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.401 [2024-06-11 15:16:40.234563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.401 [2024-06-11 15:16:40.234575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.246767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.246794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.246805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.259295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.259323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.259335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.271005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.271038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.271050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.286055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.286081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.286092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.298850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.298877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.298888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.310274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.310300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.310312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.321872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.321898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.321909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.336723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.336749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.336761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.349113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.349139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.349155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.361171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.361197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.361209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.372747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.372773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.372785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.387785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.387811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.387823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.401042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.401069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.401081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.417821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.417847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.417859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.432243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.432270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.432281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.448364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.660 [2024-06-11 15:16:40.448392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.660 [2024-06-11 15:16:40.448403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.660 [2024-06-11 15:16:40.463127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.661 [2024-06-11 15:16:40.463153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.661 [2024-06-11 15:16:40.463165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.661 [2024-06-11 15:16:40.475867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.661 [2024-06-11 15:16:40.475897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.661 [2024-06-11 15:16:40.475910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.661 [2024-06-11 15:16:40.488540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.661 [2024-06-11 15:16:40.488568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.661 [2024-06-11 15:16:40.488580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.661 [2024-06-11 15:16:40.500550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.661 [2024-06-11 15:16:40.500579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.661 [2024-06-11 15:16:40.500592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.512007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.512041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.512054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.526062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.526089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.526101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.537418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.537446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.537458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.552501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.552529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.552541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.565991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.566019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.566037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.582676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.582704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.582720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.595840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.595867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:21.920 [2024-06-11 15:16:40.595879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.611014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.611048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.611059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.624288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.624315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.624327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.636993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.637021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.637041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.654810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.654837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.654848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.670632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.920 [2024-06-11 15:16:40.670660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.920 [2024-06-11 15:16:40.670672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.920 [2024-06-11 15:16:40.686309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.921 [2024-06-11 15:16:40.686337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.921 [2024-06-11 15:16:40.686349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.921 [2024-06-11 15:16:40.702462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.921 [2024-06-11 15:16:40.702489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.921 [2024-06-11 15:16:40.702501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.921 [2024-06-11 15:16:40.715244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.921 [2024-06-11 15:16:40.715276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.921 [2024-06-11 15:16:40.715288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.921 [2024-06-11 15:16:40.726151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.921 [2024-06-11 15:16:40.726178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.921 [2024-06-11 15:16:40.726189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.921 [2024-06-11 15:16:40.736371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.921 [2024-06-11 15:16:40.736397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.921 [2024-06-11 15:16:40.736409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.921 [2024-06-11 15:16:40.746619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.921 [2024-06-11 15:16:40.746646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.921 [2024-06-11 15:16:40.746657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.921 [2024-06-11 15:16:40.756791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:21.921 [2024-06-11 15:16:40.756817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.921 [2024-06-11 15:16:40.756829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:22.180 [2024-06-11 15:16:40.767038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.180 [2024-06-11 15:16:40.767063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.180 [2024-06-11 15:16:40.767075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.180 [2024-06-11 15:16:40.777352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.180 [2024-06-11 15:16:40.777378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.180 [2024-06-11 15:16:40.777390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:22.180 [2024-06-11 15:16:40.787505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.180 [2024-06-11 15:16:40.787532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.180 [2024-06-11 15:16:40.787543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:22.180 [2024-06-11 15:16:40.797807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.180 [2024-06-11 15:16:40.797834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.180 [2024-06-11 15:16:40.797845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:22.180 [2024-06-11 15:16:40.807919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.180 [2024-06-11 15:16:40.807945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.807956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.818132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.818158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.818170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.828313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.828339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.828350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.838542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.838570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.838582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.848709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 
00:31:22.181 [2024-06-11 15:16:40.848735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.848746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.858851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.858877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.858889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.869050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.869077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.869089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.879398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.879424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.879436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.889561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.889588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.889604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.899700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.899726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.899738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.909809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.909835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.909847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.920039] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.920065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.920076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.930204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.930230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.930241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.940394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.940421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.940432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.950567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.950593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.950604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.960760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.960786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.960797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.970894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.970920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.970931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.981108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.981138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.981149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:40.991297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:40.991322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:40.991334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:41.001441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:41.001466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:41.001478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:22.181 [2024-06-11 15:16:41.011561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1095820) 00:31:22.181 [2024-06-11 15:16:41.011587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.181 [2024-06-11 15:16:41.011599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.181 00:31:22.181 Latency(us) 00:31:22.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.181 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:22.181 nvme0n1 : 2.01 2605.43 325.68 0.00 0.00 6136.51 4468.36 16681.89 00:31:22.181 =================================================================================================================== 00:31:22.181 Total : 2605.43 325.68 0.00 0.00 6136.51 4468.36 16681.89 00:31:22.181 0 00:31:22.440 15:16:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:22.440 15:16:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:22.440 15:16:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:22.440 | .driver_specific 00:31:22.440 | .nvme_error 00:31:22.440 | .status_code 00:31:22.440 | .command_transient_transport_error' 00:31:22.440 15:16:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:22.440 15:16:41 -- host/digest.sh@71 -- # (( 168 > 0 )) 00:31:22.440 15:16:41 -- host/digest.sh@73 -- # killprocess 3471443 00:31:22.440 15:16:41 -- common/autotest_common.sh@926 -- # '[' -z 3471443 ']' 00:31:22.440 15:16:41 -- common/autotest_common.sh@930 -- # kill -0 3471443 00:31:22.440 15:16:41 -- common/autotest_common.sh@931 -- # uname 00:31:22.440 15:16:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:22.700 15:16:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3471443 00:31:22.700 15:16:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:22.700 15:16:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:22.700 15:16:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3471443' 00:31:22.700 killing process with pid 3471443 00:31:22.700 15:16:41 -- common/autotest_common.sh@945 -- # kill 3471443 00:31:22.700 Received shutdown signal, test 
time was about 2.000000 seconds 00:31:22.700 00:31:22.700 Latency(us) 00:31:22.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.700 =================================================================================================================== 00:31:22.700 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:22.700 15:16:41 -- common/autotest_common.sh@950 -- # wait 3471443 00:31:22.959 15:16:41 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:31:22.959 15:16:41 -- host/digest.sh@54 -- # local rw bs qd 00:31:22.959 15:16:41 -- host/digest.sh@56 -- # rw=randwrite 00:31:22.959 15:16:41 -- host/digest.sh@56 -- # bs=4096 00:31:22.959 15:16:41 -- host/digest.sh@56 -- # qd=128 00:31:22.959 15:16:41 -- host/digest.sh@58 -- # bperfpid=3472218 00:31:22.959 15:16:41 -- host/digest.sh@60 -- # waitforlisten 3472218 /var/tmp/bperf.sock 00:31:22.959 15:16:41 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:22.959 15:16:41 -- common/autotest_common.sh@819 -- # '[' -z 3472218 ']' 00:31:22.959 15:16:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:22.959 15:16:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:22.959 15:16:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:22.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:22.959 15:16:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:22.959 15:16:41 -- common/autotest_common.sh@10 -- # set +x 00:31:22.959 [2024-06-11 15:16:41.591335] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
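The randread pass above ends with the check traced at host/digest.sh@71: get_transient_errcount reads the per-bdev error counters over the bperf RPC socket and asserts that at least one transient transport error was recorded ((( 168 > 0 )) here). The summary table is also self-consistent: 2605.43 IOPS at 128 KiB per I/O works out to the reported 325.68 MiB/s. A minimal standalone sketch of the same counter query, assuming an SPDK checkout as the working directory and bdevperf still listening on /var/tmp/bperf.sock with a bdev named nvme0n1:

  # Pull the transient-transport-error counter the digest test asserts on
  # (populated because bdev_nvme_set_options was called with --nvme-error-stat).
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "injected digest errors were detected and retried: $errcount"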
00:31:22.959 [2024-06-11 15:16:41.591397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472218 ] 00:31:22.959 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.959 [2024-06-11 15:16:41.673956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.959 [2024-06-11 15:16:41.755276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.893 15:16:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:23.893 15:16:42 -- common/autotest_common.sh@852 -- # return 0 00:31:23.893 15:16:42 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:23.893 15:16:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:24.152 15:16:42 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:24.152 15:16:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.152 15:16:42 -- common/autotest_common.sh@10 -- # set +x 00:31:24.152 15:16:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.152 15:16:42 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:24.152 15:16:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:24.411 nvme0n1 00:31:24.411 15:16:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:24.411 15:16:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.411 15:16:43 -- common/autotest_common.sh@10 -- # set +x 00:31:24.411 15:16:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.411 15:16:43 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:24.411 15:16:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:24.671 Running I/O for 2 seconds... 
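For the randwrite pass that produces the WRITE-side digest errors below, the trace above reduces to a short sequence. The sketch keeps the flags from the log but uses paths relative to an SPDK checkout, so treat it as an outline of the test flow rather than a drop-in script:

  # 1. Start bdevperf as an idle RPC server (-z): 4 KiB random writes, queue depth 128, 2 s run.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # 2. Keep per-bdev NVMe error statistics and retry transient failures indefinitely.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 3. Attach the NVMe-oF/TCP controller with data digest (DDGST) enabled.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 4. Corrupt every 256th crc32c accel operation so data digests stop matching
  #    (the trace issues this through rpc_cmd, i.e. the default RPC socket, not the bperf one).
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # 5. Run the queued bdevperf job; every corrupted digest shows up below as a
  #    COMMAND TRANSIENT TRANSPORT ERROR completion that gets retried.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests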
00:31:24.671 [2024-06-11 15:16:43.330052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.330355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.330392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.344113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.344410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.344440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.358204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.358490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.358515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.372403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.372687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.372713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.386630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.386918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.386943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.400743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.401035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.401060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.414917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.415201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.415226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a 
p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.429014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.429308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.429332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.443169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.443457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.443481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.457290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.457574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.457599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.471443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.471727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.471752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.485553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.485836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.485861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.671 [2024-06-11 15:16:43.499717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.671 [2024-06-11 15:16:43.499999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.671 [2024-06-11 15:16:43.500023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.513838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.514123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.514148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.528008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.528304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.528327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.542122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.542404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.542428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.556271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.556560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.556586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.570422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.570705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.570729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.584543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.584826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.584855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.598700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.598984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.599009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.612870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.613159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.613184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.626980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.627265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.627290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.641121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.641405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.641430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.655235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.655526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.655549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.669402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.669693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.669717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.683530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.683814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.683839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.697649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.697933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.697959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.711774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.712066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.712091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.725902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.726191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.726215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.931 [2024-06-11 15:16:43.740008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.931 [2024-06-11 15:16:43.740299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.931 [2024-06-11 15:16:43.740324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.932 [2024-06-11 15:16:43.754147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.932 [2024-06-11 15:16:43.754432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.932 [2024-06-11 15:16:43.754456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.932 [2024-06-11 15:16:43.768256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:24.932 [2024-06-11 15:16:43.768543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.932 [2024-06-11 15:16:43.768568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.782388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.782673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.782697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.796538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.796822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.796846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.810643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.810925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.810949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.824767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.825062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.825087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.838884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.839169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.839194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.853005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.853296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.853320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.867404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.867696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.867721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.881525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.881811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.881835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.895668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.895949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.895973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.909804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.910093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.910117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.923935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.924225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.924249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.938070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.938356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.938379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.191 [2024-06-11 15:16:43.952210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.191 [2024-06-11 15:16:43.952493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.191 [2024-06-11 15:16:43.952520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.192 [2024-06-11 15:16:43.966331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.192 [2024-06-11 15:16:43.966616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.192 [2024-06-11 15:16:43.966641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.192 [2024-06-11 15:16:43.980429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.192 [2024-06-11 15:16:43.980711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.192 [2024-06-11 15:16:43.980734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.192 [2024-06-11 15:16:43.994591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.192 [2024-06-11 15:16:43.994872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.192 [2024-06-11 15:16:43.994897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.192 [2024-06-11 15:16:44.008697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.192 [2024-06-11 15:16:44.008987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.192 [2024-06-11 15:16:44.009012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.192 [2024-06-11 15:16:44.022855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.192 [2024-06-11 15:16:44.023145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.192 [2024-06-11 15:16:44.023169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.451 [2024-06-11 15:16:44.036967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.451 [2024-06-11 15:16:44.037249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.451 [2024-06-11 15:16:44.037274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.451 [2024-06-11 15:16:44.051129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.451 [2024-06-11 15:16:44.051417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.451 [2024-06-11 15:16:44.051441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.065290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.065575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.065600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.079419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.079700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.079727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.093538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.093828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.093853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.107685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.107970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.107995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.121783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.122071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.122096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.135964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.136252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.136277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.150072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.150358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.150383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.164236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.164521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.164546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.178398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.178680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.178705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.192548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.192953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.192978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.206694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.206980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 
15:16:44.207005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.220961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.221251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.221276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.235096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.235380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.235404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.249262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.249555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.249579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.263403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.263690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.263714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.277549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.277835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.277859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.452 [2024-06-11 15:16:44.291734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.452 [2024-06-11 15:16:44.292017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.452 [2024-06-11 15:16:44.292046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.305852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.306142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 
[2024-06-11 15:16:44.306167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.320012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.320302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.320326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.334166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.334458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.334481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.348316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.348605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.348629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.362474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.362756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.362780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.376596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.376886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.376910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.390759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.391050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.391075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.404930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.405214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:25.712 [2024-06-11 15:16:44.405238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.419058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.419345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.419369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.433198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.433481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.433505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.447314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.447602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.447630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.461468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.461751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.461776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.475622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.475904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.475928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.489739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.490023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.490054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.503897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.504183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:25.712 [2024-06-11 15:16:44.504209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.518056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.518342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.518367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.532182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.532466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.532491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.712 [2024-06-11 15:16:44.546322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.712 [2024-06-11 15:16:44.546603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.712 [2024-06-11 15:16:44.546627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.560450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.560737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.560761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.574578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.574874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.574899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.588702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.588988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.589013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.602853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.603143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5172 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.603167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.616998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.617290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.617315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.631138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.631423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.631447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.645234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.645518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.645542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.659394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.659677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.659700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.673505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.673790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.673814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.687658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.687942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.687968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.701798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.702090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7008 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.702115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.715913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.716203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.716228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.730019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.730315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.730339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.744184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.744476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.744500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.758279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.758562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.758586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.772439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.772725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.772749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.786552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.786839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.786864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.973 [2024-06-11 15:16:44.800683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:25.973 [2024-06-11 15:16:44.800968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 
nsid:1 lba:12695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.973 [2024-06-11 15:16:44.800992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.233 [2024-06-11 15:16:44.814823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.233 [2024-06-11 15:16:44.815112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.233 [2024-06-11 15:16:44.815141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.233 [2024-06-11 15:16:44.828944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.233 [2024-06-11 15:16:44.829233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.233 [2024-06-11 15:16:44.829258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.233 [2024-06-11 15:16:44.843061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.843349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.843373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.857166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.857450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.857474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.871527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.871818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.871843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.885660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.885940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.885965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.899799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.900090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:5840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.900115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.913894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.914178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.914202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.928017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.928307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.928331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.942140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.942440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.942465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.956289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.956572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.956597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.970415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.970698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.970722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.984519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.984809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.984835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:44.998653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:44.998941] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:44.998965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:45.012787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:45.013078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:45.013103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:45.026913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:45.027195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:45.027220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:45.041045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:45.041332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:45.041356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:45.055164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:45.055446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:45.055472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.234 [2024-06-11 15:16:45.069269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.234 [2024-06-11 15:16:45.069554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.234 [2024-06-11 15:16:45.069579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.083607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.083893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.083918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.097710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.097994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.098018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.111830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.112117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.112142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.125960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.126246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.126270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.140071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.140353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.140377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.154205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.154489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.154513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.168307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.168588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.168611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.182406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.182690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.182720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.196525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.196806] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.196830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.210654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.210937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.210961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.224805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.225092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.225116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.239009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.239302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.239327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.253111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.253396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.253420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.267228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.267511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.267536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.281338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 15:16:45.281625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.494 [2024-06-11 15:16:45.281649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.494 [2024-06-11 15:16:45.295499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40 00:31:26.494 [2024-06-11 
15:16:45.295788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.494 [2024-06-11 15:16:45.295812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:31:26.494 [2024-06-11 15:16:45.309593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49930) with pdu=0x2000190f4f40
00:31:26.494 [2024-06-11 15:16:45.309881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:26.494 [2024-06-11 15:16:45.309909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:31:26.494
00:31:26.494 Latency(us)
00:31:26.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:26.494 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:26.494 nvme0n1 : 2.01 18032.28 70.44 0.00 0.00 7082.80 4855.62 14239.19
00:31:26.494 ===================================================================================================================
00:31:26.494 Total : 18032.28 70.44 0.00 0.00 7082.80 4855.62 14239.19
00:31:26.494 0
00:31:26.754 15:16:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:26.754 15:16:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:26.754 15:16:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:26.754 | .driver_specific
00:31:26.754 | .nvme_error
00:31:26.754 | .status_code
00:31:26.754 | .command_transient_transport_error'
00:31:26.754 15:16:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:26.754 15:16:45 -- host/digest.sh@71 -- # (( 141 > 0 ))
00:31:26.754 15:16:45 -- host/digest.sh@73 -- # killprocess 3472218
00:31:26.754 15:16:45 -- common/autotest_common.sh@926 -- # '[' -z 3472218 ']'
00:31:26.754 15:16:45 -- common/autotest_common.sh@930 -- # kill -0 3472218
00:31:26.754 15:16:45 -- common/autotest_common.sh@931 -- # uname
00:31:26.754 15:16:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:27.013 15:16:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3472218
00:31:27.013 15:16:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:27.013 15:16:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:27.013 15:16:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3472218'
00:31:27.013 killing process with pid 3472218
00:31:27.013 15:16:45 -- common/autotest_common.sh@945 -- # kill 3472218
00:31:27.013 Received shutdown signal, test time was about 2.000000 seconds
00:31:27.013
00:31:27.013 Latency(us)
00:31:27.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:27.013 ===================================================================================================================
00:31:27.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:27.013 15:16:45 -- common/autotest_common.sh@950 -- # wait 3472218
00:31:27.273 15:16:45 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:31:27.273 15:16:45 -- host/digest.sh@54 -- # local rw bs qd
00:31:27.273 15:16:45 -- host/digest.sh@56 -- # rw=randwrite
00:31:27.273 15:16:45 -- host/digest.sh@56 -- # bs=131072
00:31:27.273 15:16:45 -- host/digest.sh@56 -- # qd=16
00:31:27.273 15:16:45 -- host/digest.sh@58 -- # bperfpid=3472895
00:31:27.273 15:16:45 -- host/digest.sh@60 -- # waitforlisten 3472895 /var/tmp/bperf.sock
00:31:27.273 15:16:45 -- common/autotest_common.sh@819 -- # '[' -z 3472895 ']'
00:31:27.273 15:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:27.273 15:16:45 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:27.273 15:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:27.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:27.273 15:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:27.273 15:16:45 -- common/autotest_common.sh@10 -- # set +x
00:31:27.273 15:16:45 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:31:27.273 [2024-06-11 15:16:45.907510] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:31:27.273 [2024-06-11 15:16:45.907569] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472895 ]
00:31:27.273 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:27.273 Zero copy mechanism will not be used.
00:31:27.273 EAL: No free 2048 kB hugepages reported on node 1
00:31:27.273 [2024-06-11 15:16:45.988906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:27.273 [2024-06-11 15:16:46.076068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:28.211 15:16:46 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:28.211 15:16:46 -- common/autotest_common.sh@852 -- # return 0
00:31:28.211 15:16:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:28.211 15:16:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:28.470 15:16:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:28.470 15:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:28.470 15:16:47 -- common/autotest_common.sh@10 -- # set +x
00:31:28.470 15:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:28.470 15:16:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:28.470 15:16:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:28.729 nvme0n1
00:31:28.729 15:16:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:28.729 15:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:28.729 15:16:47 -- common/autotest_common.sh@10 -- # set +x
00:31:28.729 15:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:28.729 15:16:47 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:28.729 15:16:47 -- host/digest.sh@19 -- 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:28.729 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:28.729 Zero copy mechanism will not be used. 00:31:28.729 Running I/O for 2 seconds... 00:31:28.729 [2024-06-11 15:16:47.565013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.729 [2024-06-11 15:16:47.565311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.729 [2024-06-11 15:16:47.565347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.989 [2024-06-11 15:16:47.576504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.989 [2024-06-11 15:16:47.576699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.989 [2024-06-11 15:16:47.576729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.989 [2024-06-11 15:16:47.586008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.989 [2024-06-11 15:16:47.586176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.989 [2024-06-11 15:16:47.586202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.989 [2024-06-11 15:16:47.596621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.989 [2024-06-11 15:16:47.596770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.989 [2024-06-11 15:16:47.596796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.989 [2024-06-11 15:16:47.605723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.989 [2024-06-11 15:16:47.605872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.989 [2024-06-11 15:16:47.605898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.989 [2024-06-11 15:16:47.615707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.989 [2024-06-11 15:16:47.616015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.989 [2024-06-11 15:16:47.616050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.625137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 
15:16:47.625373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.625398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.635877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.636302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.636328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.646317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.646583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.646609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.656082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.656408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.656434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.665744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.666001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.666045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.675272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.675544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.675570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.685667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.685845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.685874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.695785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with 
pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.695962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.695986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.705263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.705593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.705619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.715727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.716103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.716130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.725666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.726066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.726091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.735452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.735674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.735699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.745838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.746096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.746122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.757079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.757281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.757304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.767558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.767921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.767945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.777645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.777900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.777929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.787058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.787275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.787302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.797418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.797671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.797695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.806975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.807280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.807304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.817729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.818019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.818052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:28.990 [2024-06-11 15:16:47.828765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:28.990 [2024-06-11 15:16:47.829034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.990 [2024-06-11 15:16:47.829060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.838550] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.838699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.838722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.849365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.849561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.849584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.859210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.859551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.859575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.868756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.869052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.869078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.878545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.878978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.879004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.888454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.888796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.888823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.898119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.898403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.898428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.907923] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.908205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.908229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.917784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.917905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.917929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.250 [2024-06-11 15:16:47.927359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.250 [2024-06-11 15:16:47.927605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.250 [2024-06-11 15:16:47.927629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:47.937475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:47.937844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:47.937868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:47.947903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:47.948079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:47.948102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:47.957880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:47.958354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:47.958379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:47.968132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:47.968355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:47.968380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 
15:16:47.977951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:47.978277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:47.978302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:47.988543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:47.988777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:47.988803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:47.999352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:47.999602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:47.999627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:48.009282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:48.009491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:48.009515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:48.020347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:48.020751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:48.020776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:48.033376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:48.033616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:48.033640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:48.044559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:48.044735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:48.044766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
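For reference, the flow this stretch of the trace is exercising can be reproduced by hand roughly as follows. This is a minimal sketch built only from the commands already visible in the trace above; it assumes an nvmf target is already listening on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1, and it assumes rpc_cmd (the autotest helper traced above) reaches that target application's default RPC socket rather than /var/tmp/bperf.sock:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # Start bdevperf as the TCP host; -z makes it idle until a perform_tests RPC arrives.
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 131072 -t 2 -q 16 -z &
  # (in a manual reproduction, wait for $SOCK to appear before sending RPCs)

  # Keep per-status-code NVMe error statistics and retry indefinitely, so injected
  # digest errors are retried rather than failing the bdev.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Inject crc32c corruption at an interval of 32 operations (assumption: this RPC goes
  # to the target application's default socket, matching the rpc_cmd call in the trace).
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Attach the controller with data digest enabled, so the corrupted CRCs surface as
  # "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR entries like those above.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the workload, then read back how many commands completed with a transient
  # transport error; the test treats a non-zero count as success.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  $SPDK/scripts/rpc.py -s $SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In the previous pass (the block ending with "killprocess 3472218" above), the same readback returned 141, the (( 141 > 0 )) check passed, and the first bdevperf instance was killed before starting the randwrite/131072/16 pass whose output continues below.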
00:31:29.251 [2024-06-11 15:16:48.055624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:48.055910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:48.055935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:48.065449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:48.065694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:48.065719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:48.076110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:48.076369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:48.076394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.251 [2024-06-11 15:16:48.089132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.251 [2024-06-11 15:16:48.089451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.251 [2024-06-11 15:16:48.089476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.099371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.099669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.099694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.109223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.109621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.109645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.118722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.118975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.119000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.128948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.129348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.129373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.138867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.139306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.139331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.149262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.149718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.149742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.160125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.160263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.160286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.169780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.170060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.170085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.179601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.179855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.179880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.190142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.190434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.190458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.200186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.200452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.200476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.211363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.211896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.211920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.221749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.222152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.222178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.231645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.511 [2024-06-11 15:16:48.231964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.511 [2024-06-11 15:16:48.231989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.511 [2024-06-11 15:16:48.241320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.241585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.241609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.251133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.251283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.251307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.260206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.260404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.260427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.269658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.269975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.269999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.280051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.280312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.280337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.289369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.289867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.289892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.299745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.300052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.300077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.309409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.309649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.309679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.319560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.319877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.319902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.328391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.328633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.328656] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.338690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.338811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.338834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.512 [2024-06-11 15:16:48.348775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.512 [2024-06-11 15:16:48.349004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.512 [2024-06-11 15:16:48.349034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.358972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.359222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.359247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.369087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.369548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.369573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.378163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.378410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.378434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.386894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.387298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.387323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.397165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.397472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.397497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.406249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.406480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.406506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.414971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.415222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.415246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.424899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.425225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.425250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.434280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.434587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.434612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.443863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.444247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.444272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.453613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.453961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.453985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.463742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.464032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 
15:16:48.464056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.772 [2024-06-11 15:16:48.473829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.772 [2024-06-11 15:16:48.474038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.772 [2024-06-11 15:16:48.474062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.483067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.483347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.483371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.493493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.493790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.493815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.503018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.503395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.503420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.513208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.513459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.513483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.523816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.524194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.524219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.533488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.533791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:29.773 [2024-06-11 15:16:48.533815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.543872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.544100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.544124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.553901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.554106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.554129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.563747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.564040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.564069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.573823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.573947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.573971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.583783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.583974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.583999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.595525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.595749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.595774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:29.773 [2024-06-11 15:16:48.605526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:29.773 [2024-06-11 15:16:48.605880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:29.773 [2024-06-11 15:16:48.605905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.033 [2024-06-11 15:16:48.615858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.033 [2024-06-11 15:16:48.616281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.033 [2024-06-11 15:16:48.616306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.033 [2024-06-11 15:16:48.626200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.033 [2024-06-11 15:16:48.626620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.033 [2024-06-11 15:16:48.626645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.033 [2024-06-11 15:16:48.635774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.033 [2024-06-11 15:16:48.636213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.033 [2024-06-11 15:16:48.636237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.033 [2024-06-11 15:16:48.646080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.033 [2024-06-11 15:16:48.646326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.033 [2024-06-11 15:16:48.646350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.033 [2024-06-11 15:16:48.655061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.033 [2024-06-11 15:16:48.655479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.033 [2024-06-11 15:16:48.655503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.033 [2024-06-11 15:16:48.665956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.033 [2024-06-11 15:16:48.666166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.033 [2024-06-11 15:16:48.666190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.033 [2024-06-11 15:16:48.676652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.033 [2024-06-11 15:16:48.677038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.033 [2024-06-11 15:16:48.677063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.033 [2024-06-11 15:16:48.687239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.033 [2024-06-11 15:16:48.687516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.033 [2024-06-11 15:16:48.687543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.033 [2024-06-11 15:16:48.696458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.033 [2024-06-11 15:16:48.696743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.696768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.707462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.707828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.707853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.718204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.718583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.718610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.729315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.729468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.729492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.739104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.739376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.739402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.749357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.749603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.749628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.760123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.760467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.760492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.770567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.770959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.770985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.780334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.780637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.780663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.791609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.791961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.791988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.801649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.801872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.801897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.811547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.811736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.811759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.822150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.822495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.822521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.832535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.832773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.832803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.842406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.842654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.842679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.851626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.851975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.852000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.860426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.860790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.860816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.034 [2024-06-11 15:16:48.869952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.034 [2024-06-11 15:16:48.870253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.034 [2024-06-11 15:16:48.870277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.879665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.879957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.879982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.889490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.889829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.889854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.899634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.899929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.899954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.909529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.909802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.909827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.920032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.920283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.920307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.929740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.930095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.930121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.939707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.939959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.939983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.950399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.950704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.950729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.961124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 
15:16:48.961328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.961352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.971834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.972171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.972196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.981546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.981814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.294 [2024-06-11 15:16:48.981839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.294 [2024-06-11 15:16:48.991716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.294 [2024-06-11 15:16:48.991973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:48.991997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.002398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.002602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.002624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.011666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.012106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.012132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.022106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.022453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.022478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.032330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 
00:31:30.295 [2024-06-11 15:16:49.032590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.032617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.042861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.043161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.043186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.053657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.053869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.053894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.063960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.064369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.064394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.074633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.074902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.074926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.084891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.085101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.085126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.094541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.094916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.094947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.105313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with 
pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.105650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.105675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.115122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.115325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.115348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.124484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.124775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.124801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.295 [2024-06-11 15:16:49.134142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.295 [2024-06-11 15:16:49.134454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.295 [2024-06-11 15:16:49.134478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.144384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.556 [2024-06-11 15:16:49.144738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.556 [2024-06-11 15:16:49.144764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.154796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.556 [2024-06-11 15:16:49.155166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.556 [2024-06-11 15:16:49.155191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.163897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.556 [2024-06-11 15:16:49.164235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.556 [2024-06-11 15:16:49.164260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.173853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.556 [2024-06-11 15:16:49.174126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.556 [2024-06-11 15:16:49.174150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.183634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.556 [2024-06-11 15:16:49.184017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.556 [2024-06-11 15:16:49.184050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.193607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.556 [2024-06-11 15:16:49.193894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.556 [2024-06-11 15:16:49.193919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.204548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.556 [2024-06-11 15:16:49.204753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.556 [2024-06-11 15:16:49.204776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.213627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.556 [2024-06-11 15:16:49.213981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.556 [2024-06-11 15:16:49.214006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.222953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.556 [2024-06-11 15:16:49.223100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.556 [2024-06-11 15:16:49.223125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.556 [2024-06-11 15:16:49.232161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.232393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.232416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.240877] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.241150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.241175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.250131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.250335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.250358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.259491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.259818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.259843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.268993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.269227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.269253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.278260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.278502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.278526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.288771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.288998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.289023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.299480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.299727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.299752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.310082] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.310380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.310405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.319595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.319851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.319876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.329068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.329305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.329329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.338933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.339383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.339408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.348070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.348300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.348324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.359277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.359456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.359480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.368272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.368670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.368695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.557 
[2024-06-11 15:16:49.378380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.378729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.378753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.557 [2024-06-11 15:16:49.387994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.557 [2024-06-11 15:16:49.388165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.557 [2024-06-11 15:16:49.388188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.817 [2024-06-11 15:16:49.398497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.817 [2024-06-11 15:16:49.398829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.817 [2024-06-11 15:16:49.398853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.817 [2024-06-11 15:16:49.409798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.817 [2024-06-11 15:16:49.410080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.817 [2024-06-11 15:16:49.410106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.817 [2024-06-11 15:16:49.421006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.817 [2024-06-11 15:16:49.421389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.817 [2024-06-11 15:16:49.421414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.817 [2024-06-11 15:16:49.428939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.817 [2024-06-11 15:16:49.429296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.817 [2024-06-11 15:16:49.429321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.817 [2024-06-11 15:16:49.439829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.817 [2024-06-11 15:16:49.440045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.817 [2024-06-11 15:16:49.440072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:31:30.817 [2024-06-11 15:16:49.449748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.817 [2024-06-11 15:16:49.450011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.817 [2024-06-11 15:16:49.450042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.817 [2024-06-11 15:16:49.458034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.817 [2024-06-11 15:16:49.458276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.817 [2024-06-11 15:16:49.458300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.817 [2024-06-11 15:16:49.467328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.817 [2024-06-11 15:16:49.467517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.817 [2024-06-11 15:16:49.467539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.817 [2024-06-11 15:16:49.477207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.817 [2024-06-11 15:16:49.477419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.818 [2024-06-11 15:16:49.477443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.818 [2024-06-11 15:16:49.487052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.818 [2024-06-11 15:16:49.487299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.818 [2024-06-11 15:16:49.487324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.818 [2024-06-11 15:16:49.496772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.818 [2024-06-11 15:16:49.497158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.818 [2024-06-11 15:16:49.497183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.818 [2024-06-11 15:16:49.506474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.818 [2024-06-11 15:16:49.506740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.818 [2024-06-11 15:16:49.506765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:31:30.818 [2024-06-11 15:16:49.516622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.818 [2024-06-11 15:16:49.516878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.818 [2024-06-11 15:16:49.516903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.818 [2024-06-11 15:16:49.525266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.818 [2024-06-11 15:16:49.525453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.818 [2024-06-11 15:16:49.525476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.818 [2024-06-11 15:16:49.534228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.818 [2024-06-11 15:16:49.534464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.818 [2024-06-11 15:16:49.534489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.818 [2024-06-11 15:16:49.545146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb49c70) with pdu=0x2000190fef90 00:31:30.818 [2024-06-11 15:16:49.545406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.818 [2024-06-11 15:16:49.545431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.818 00:31:30.818 Latency(us) 00:31:30.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.818 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:30.818 nvme0n1 : 2.01 3071.70 383.96 0.00 0.00 5198.77 3306.59 15371.17 00:31:30.818 =================================================================================================================== 00:31:30.818 Total : 3071.70 383.96 0.00 0.00 5198.77 3306.59 15371.17 00:31:30.818 0 00:31:30.818 15:16:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:30.818 15:16:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:30.818 15:16:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:30.818 | .driver_specific 00:31:30.818 | .nvme_error 00:31:30.818 | .status_code 00:31:30.818 | .command_transient_transport_error' 00:31:30.818 15:16:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:31.078 15:16:49 -- host/digest.sh@71 -- # (( 198 > 0 )) 00:31:31.078 15:16:49 -- host/digest.sh@73 -- # killprocess 3472895 00:31:31.078 15:16:49 -- common/autotest_common.sh@926 -- # '[' -z 3472895 ']' 00:31:31.078 15:16:49 -- common/autotest_common.sh@930 -- # kill -0 3472895 00:31:31.078 15:16:49 -- common/autotest_common.sh@931 -- # uname 00:31:31.078 15:16:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:31.078 
15:16:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3472895 00:31:31.078 15:16:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:31.078 15:16:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:31.078 15:16:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3472895' 00:31:31.078 killing process with pid 3472895 00:31:31.078 15:16:49 -- common/autotest_common.sh@945 -- # kill 3472895 00:31:31.078 Received shutdown signal, test time was about 2.000000 seconds 00:31:31.078 00:31:31.078 Latency(us) 00:31:31.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.078 =================================================================================================================== 00:31:31.078 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:31.078 15:16:49 -- common/autotest_common.sh@950 -- # wait 3472895 00:31:31.337 15:16:50 -- host/digest.sh@115 -- # killprocess 3470404 00:31:31.337 15:16:50 -- common/autotest_common.sh@926 -- # '[' -z 3470404 ']' 00:31:31.337 15:16:50 -- common/autotest_common.sh@930 -- # kill -0 3470404 00:31:31.337 15:16:50 -- common/autotest_common.sh@931 -- # uname 00:31:31.337 15:16:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:31.337 15:16:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3470404 00:31:31.337 15:16:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:31.337 15:16:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:31.337 15:16:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3470404' 00:31:31.337 killing process with pid 3470404 00:31:31.337 15:16:50 -- common/autotest_common.sh@945 -- # kill 3470404 00:31:31.337 15:16:50 -- common/autotest_common.sh@950 -- # wait 3470404 00:31:31.597 00:31:31.597 real 0m18.232s 00:31:31.597 user 0m36.261s 00:31:31.597 sys 0m4.232s 00:31:31.597 15:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:31.597 15:16:50 -- common/autotest_common.sh@10 -- # set +x 00:31:31.597 ************************************ 00:31:31.597 END TEST nvmf_digest_error 00:31:31.597 ************************************ 00:31:31.597 15:16:50 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:31:31.597 15:16:50 -- host/digest.sh@139 -- # nvmftestfini 00:31:31.597 15:16:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:31.597 15:16:50 -- nvmf/common.sh@116 -- # sync 00:31:31.597 15:16:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:31.597 15:16:50 -- nvmf/common.sh@119 -- # set +e 00:31:31.597 15:16:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:31.597 15:16:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:31.597 rmmod nvme_tcp 00:31:31.597 rmmod nvme_fabrics 00:31:31.597 rmmod nvme_keyring 00:31:31.857 15:16:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:31.857 15:16:50 -- nvmf/common.sh@123 -- # set -e 00:31:31.857 15:16:50 -- nvmf/common.sh@124 -- # return 0 00:31:31.857 15:16:50 -- nvmf/common.sh@477 -- # '[' -n 3470404 ']' 00:31:31.857 15:16:50 -- nvmf/common.sh@478 -- # killprocess 3470404 00:31:31.857 15:16:50 -- common/autotest_common.sh@926 -- # '[' -z 3470404 ']' 00:31:31.857 15:16:50 -- common/autotest_common.sh@930 -- # kill -0 3470404 00:31:31.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3470404) - No such process 00:31:31.857 15:16:50 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3470404 is not 
found' 00:31:31.857 Process with pid 3470404 is not found 00:31:31.857 15:16:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:31.857 15:16:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:31.857 15:16:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:31.857 15:16:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:31.857 15:16:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:31.857 15:16:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.857 15:16:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:31.857 15:16:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.840 15:16:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:33.840 00:31:33.840 real 0m43.342s 00:31:33.840 user 1m9.825s 00:31:33.840 sys 0m13.329s 00:31:33.840 15:16:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:33.840 15:16:52 -- common/autotest_common.sh@10 -- # set +x 00:31:33.840 ************************************ 00:31:33.840 END TEST nvmf_digest 00:31:33.840 ************************************ 00:31:33.840 15:16:52 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:31:33.840 15:16:52 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:31:33.840 15:16:52 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:31:33.840 15:16:52 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:33.840 15:16:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:33.840 15:16:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:33.840 15:16:52 -- common/autotest_common.sh@10 -- # set +x 00:31:33.840 ************************************ 00:31:33.840 START TEST nvmf_bdevperf 00:31:33.840 ************************************ 00:31:33.840 15:16:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:33.840 * Looking for test storage... 
00:31:33.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:33.840 15:16:52 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.840 15:16:52 -- nvmf/common.sh@7 -- # uname -s 00:31:33.840 15:16:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.840 15:16:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.840 15:16:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.840 15:16:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.840 15:16:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.840 15:16:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.840 15:16:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.840 15:16:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.840 15:16:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.840 15:16:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.840 15:16:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:33.840 15:16:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:33.840 15:16:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.840 15:16:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.840 15:16:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.840 15:16:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.840 15:16:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.840 15:16:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.840 15:16:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.841 15:16:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.841 15:16:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.841 15:16:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.841 15:16:52 -- paths/export.sh@5 -- # export PATH 00:31:33.841 15:16:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.841 15:16:52 -- nvmf/common.sh@46 -- # : 0 00:31:33.841 15:16:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:33.841 15:16:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:33.841 15:16:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:33.841 15:16:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.841 15:16:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.841 15:16:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:33.841 15:16:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:33.841 15:16:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:33.841 15:16:52 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:33.841 15:16:52 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:33.841 15:16:52 -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:33.841 15:16:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:33.841 15:16:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.841 15:16:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:33.841 15:16:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:33.841 15:16:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:33.841 15:16:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.841 15:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:33.841 15:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.841 15:16:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:33.841 15:16:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:33.841 15:16:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:33.841 15:16:52 -- common/autotest_common.sh@10 -- # set +x 00:31:40.429 15:16:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:40.429 15:16:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:40.429 15:16:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:40.429 15:16:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:40.429 15:16:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:40.429 15:16:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:40.429 15:16:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:40.429 15:16:58 -- nvmf/common.sh@294 -- # net_devs=() 00:31:40.429 15:16:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:40.429 15:16:58 -- nvmf/common.sh@295 
-- # e810=() 00:31:40.429 15:16:58 -- nvmf/common.sh@295 -- # local -ga e810 00:31:40.429 15:16:58 -- nvmf/common.sh@296 -- # x722=() 00:31:40.429 15:16:58 -- nvmf/common.sh@296 -- # local -ga x722 00:31:40.429 15:16:58 -- nvmf/common.sh@297 -- # mlx=() 00:31:40.429 15:16:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:40.429 15:16:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.429 15:16:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:40.429 15:16:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:40.429 15:16:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:40.429 15:16:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:40.429 15:16:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:40.429 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:40.429 15:16:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:40.429 15:16:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:40.429 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:40.429 15:16:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:40.429 15:16:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:40.429 15:16:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.429 15:16:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:40.429 15:16:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.429 15:16:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:40.429 Found 
net devices under 0000:af:00.0: cvl_0_0 00:31:40.429 15:16:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.429 15:16:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:40.429 15:16:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.429 15:16:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:40.429 15:16:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.429 15:16:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:40.429 Found net devices under 0000:af:00.1: cvl_0_1 00:31:40.429 15:16:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.429 15:16:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:40.429 15:16:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:40.429 15:16:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:40.429 15:16:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:40.429 15:16:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.429 15:16:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.430 15:16:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.430 15:16:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:40.430 15:16:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.430 15:16:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.430 15:16:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:40.430 15:16:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.430 15:16:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.430 15:16:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:40.430 15:16:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:40.430 15:16:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.430 15:16:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.430 15:16:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.430 15:16:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.430 15:16:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:40.430 15:16:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.430 15:16:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.430 15:16:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.430 15:16:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:40.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:31:40.430 00:31:40.430 --- 10.0.0.2 ping statistics --- 00:31:40.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.430 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:31:40.430 15:16:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:31:40.430 00:31:40.430 --- 10.0.0.1 ping statistics --- 00:31:40.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.430 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:31:40.430 15:16:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.430 15:16:58 -- nvmf/common.sh@410 -- # return 0 00:31:40.430 15:16:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:40.430 15:16:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.430 15:16:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:40.430 15:16:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:40.430 15:16:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.430 15:16:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:40.430 15:16:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:40.430 15:16:58 -- host/bdevperf.sh@25 -- # tgt_init 00:31:40.430 15:16:58 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:40.430 15:16:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:40.430 15:16:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:40.430 15:16:58 -- common/autotest_common.sh@10 -- # set +x 00:31:40.430 15:16:58 -- nvmf/common.sh@469 -- # nvmfpid=3477650 00:31:40.430 15:16:58 -- nvmf/common.sh@470 -- # waitforlisten 3477650 00:31:40.430 15:16:58 -- common/autotest_common.sh@819 -- # '[' -z 3477650 ']' 00:31:40.430 15:16:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.430 15:16:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:40.430 15:16:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.430 15:16:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:40.430 15:16:58 -- common/autotest_common.sh@10 -- # set +x 00:31:40.430 15:16:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:40.430 [2024-06-11 15:16:58.822587] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:40.430 [2024-06-11 15:16:58.822641] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.430 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.430 [2024-06-11 15:16:58.909641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:40.430 [2024-06-11 15:16:59.000022] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:40.430 [2024-06-11 15:16:59.000171] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.430 [2024-06-11 15:16:59.000183] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.430 [2024-06-11 15:16:59.000192] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
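For orientation, the nvmf_tcp_init trace above boils down to the following namespace setup, sketched here from the commands actually logged (a condensed reconstruction, not a verbatim copy of nvmf/common.sh; it assumes, as the pings above confirm, that the two detected E810 ports cvl_0_0 and cvl_0_1 can reach each other on this host):

ip netns add cvl_0_0_ns_spdk                                        # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in on the initiator port
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

With that in place, every target-side command is prefixed with "ip netns exec cvl_0_0_ns_spdk" via NVMF_TARGET_NS_CMD, which is why the nvmf_tgt invocation above runs through that prefix while bdevperf runs in the default namespace.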
00:31:40.430 [2024-06-11 15:16:59.000302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.430 [2024-06-11 15:16:59.000413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.430 [2024-06-11 15:16:59.000414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.998 15:16:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:40.998 15:16:59 -- common/autotest_common.sh@852 -- # return 0 00:31:40.998 15:16:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:40.998 15:16:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:40.998 15:16:59 -- common/autotest_common.sh@10 -- # set +x 00:31:40.998 15:16:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.998 15:16:59 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.998 15:16:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.998 15:16:59 -- common/autotest_common.sh@10 -- # set +x 00:31:40.998 [2024-06-11 15:16:59.799846] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.998 15:16:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.998 15:16:59 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:40.998 15:16:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.998 15:16:59 -- common/autotest_common.sh@10 -- # set +x 00:31:41.257 Malloc0 00:31:41.257 15:16:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.257 15:16:59 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:41.257 15:16:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.257 15:16:59 -- common/autotest_common.sh@10 -- # set +x 00:31:41.257 15:16:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.257 15:16:59 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:41.257 15:16:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.257 15:16:59 -- common/autotest_common.sh@10 -- # set +x 00:31:41.257 15:16:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.257 15:16:59 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.257 15:16:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.257 15:16:59 -- common/autotest_common.sh@10 -- # set +x 00:31:41.257 [2024-06-11 15:16:59.870515] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.257 15:16:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.257 15:16:59 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:41.257 15:16:59 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:41.257 15:16:59 -- nvmf/common.sh@520 -- # config=() 00:31:41.257 15:16:59 -- nvmf/common.sh@520 -- # local subsystem config 00:31:41.257 15:16:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:41.257 15:16:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:41.257 { 00:31:41.257 "params": { 00:31:41.257 "name": "Nvme$subsystem", 00:31:41.257 "trtype": "$TEST_TRANSPORT", 00:31:41.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.257 "adrfam": "ipv4", 00:31:41.257 "trsvcid": "$NVMF_PORT", 00:31:41.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.257 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.257 "hdgst": ${hdgst:-false}, 00:31:41.257 "ddgst": ${ddgst:-false} 00:31:41.257 }, 00:31:41.257 "method": "bdev_nvme_attach_controller" 00:31:41.257 } 00:31:41.257 EOF 00:31:41.257 )") 00:31:41.257 15:16:59 -- nvmf/common.sh@542 -- # cat 00:31:41.257 15:16:59 -- nvmf/common.sh@544 -- # jq . 00:31:41.257 15:16:59 -- nvmf/common.sh@545 -- # IFS=, 00:31:41.257 15:16:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:41.257 "params": { 00:31:41.257 "name": "Nvme1", 00:31:41.257 "trtype": "tcp", 00:31:41.257 "traddr": "10.0.0.2", 00:31:41.257 "adrfam": "ipv4", 00:31:41.257 "trsvcid": "4420", 00:31:41.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:41.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:41.257 "hdgst": false, 00:31:41.257 "ddgst": false 00:31:41.257 }, 00:31:41.257 "method": "bdev_nvme_attach_controller" 00:31:41.257 }' 00:31:41.257 [2024-06-11 15:16:59.920291] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:41.257 [2024-06-11 15:16:59.920349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477940 ] 00:31:41.257 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.257 [2024-06-11 15:17:00.010054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.516 [2024-06-11 15:17:00.100609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.775 Running I/O for 1 seconds... 00:31:42.710 00:31:42.710 Latency(us) 00:31:42.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.710 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:42.710 Verification LBA range: start 0x0 length 0x4000 00:31:42.710 Nvme1n1 : 1.01 11539.44 45.08 0.00 0.00 11039.84 1325.61 14000.87 00:31:42.710 =================================================================================================================== 00:31:42.710 Total : 11539.44 45.08 0.00 0.00 11039.84 1325.61 14000.87 00:31:42.969 15:17:01 -- host/bdevperf.sh@30 -- # bdevperfpid=3478209 00:31:42.969 15:17:01 -- host/bdevperf.sh@32 -- # sleep 3 00:31:42.969 15:17:01 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:42.969 15:17:01 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:42.969 15:17:01 -- nvmf/common.sh@520 -- # config=() 00:31:42.969 15:17:01 -- nvmf/common.sh@520 -- # local subsystem config 00:31:42.969 15:17:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:42.969 15:17:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:42.969 { 00:31:42.969 "params": { 00:31:42.969 "name": "Nvme$subsystem", 00:31:42.969 "trtype": "$TEST_TRANSPORT", 00:31:42.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.969 "adrfam": "ipv4", 00:31:42.969 "trsvcid": "$NVMF_PORT", 00:31:42.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.969 "hdgst": ${hdgst:-false}, 00:31:42.969 "ddgst": ${ddgst:-false} 00:31:42.969 }, 00:31:42.969 "method": "bdev_nvme_attach_controller" 00:31:42.969 } 00:31:42.969 EOF 00:31:42.969 )") 00:31:42.969 15:17:01 -- nvmf/common.sh@542 -- # cat 00:31:42.969 15:17:01 -- nvmf/common.sh@544 -- # jq . 
00:31:42.969 15:17:01 -- nvmf/common.sh@545 -- # IFS=, 00:31:42.969 15:17:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:42.969 "params": { 00:31:42.969 "name": "Nvme1", 00:31:42.969 "trtype": "tcp", 00:31:42.969 "traddr": "10.0.0.2", 00:31:42.969 "adrfam": "ipv4", 00:31:42.969 "trsvcid": "4420", 00:31:42.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:42.969 "hdgst": false, 00:31:42.969 "ddgst": false 00:31:42.969 }, 00:31:42.969 "method": "bdev_nvme_attach_controller" 00:31:42.969 }' 00:31:42.969 [2024-06-11 15:17:01.700295] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:42.969 [2024-06-11 15:17:01.700359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478209 ] 00:31:42.969 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.969 [2024-06-11 15:17:01.791544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.228 [2024-06-11 15:17:01.872575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.228 Running I/O for 15 seconds... 00:31:46.524 15:17:04 -- host/bdevperf.sh@33 -- # kill -9 3477650 00:31:46.524 15:17:04 -- host/bdevperf.sh@35 -- # sleep 3 00:31:46.524 [2024-06-11 15:17:04.673409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.524 [2024-06-11 15:17:04.673458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.524 [2024-06-11 15:17:04.673486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.524 [2024-06-11 15:17:04.673498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.524 [2024-06-11 15:17:04.673513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.524 [2024-06-11 15:17:04.673525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.524 [2024-06-11 15:17:04.673538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.524 [2024-06-11 15:17:04.673549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.524 [2024-06-11 15:17:04.673561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673607] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.673978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.673988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674115] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.525 [2024-06-11 15:17:04.674256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.525 [2024-06-11 15:17:04.674278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.525 [2024-06-11 15:17:04.674301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.525 [2024-06-11 15:17:04.674323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.525 [2024-06-11 15:17:04.674347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.525 [2024-06-11 15:17:04.674368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.525 [2024-06-11 15:17:04.674380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107280 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.526 [2024-06-11 15:17:04.674742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.526 [2024-06-11 15:17:04.674785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.526 [2024-06-11 15:17:04.674895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.674982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.674994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 
15:17:04.675003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.675016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.675137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.675151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.675161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.675173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.675183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.675195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.675205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.675218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.675227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.675239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.675249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.526 [2024-06-11 15:17:04.675261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.526 [2024-06-11 15:17:04.675270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675558] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.675953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.675989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.675999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.676011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.676020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.676038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.527 [2024-06-11 15:17:04.676048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.676060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.676071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.527 [2024-06-11 15:17:04.676082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.527 [2024-06-11 15:17:04.676092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 
[2024-06-11 15:17:04.676239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.528 [2024-06-11 15:17:04.676271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.528 [2024-06-11 15:17:04.676449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676460] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116a0c0 is same with the state(5) to be set 00:31:46.528 [2024-06-11 15:17:04.676471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:46.528 [2024-06-11 15:17:04.676479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:46.528 [2024-06-11 15:17:04.676488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107760 len:8 PRP1 0x0 PRP2 0x0 00:31:46.528 [2024-06-11 15:17:04.676498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676551] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x116a0c0 was disconnected and freed. reset controller. 00:31:46.528 [2024-06-11 15:17:04.676610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.528 [2024-06-11 15:17:04.676623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.528 [2024-06-11 15:17:04.676645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.528 [2024-06-11 15:17:04.676666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.528 [2024-06-11 15:17:04.676686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.528 [2024-06-11 15:17:04.676695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.528 [2024-06-11 15:17:04.679277] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.528 [2024-06-11 15:17:04.679305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.528 [2024-06-11 15:17:04.680076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.528 [2024-06-11 15:17:04.680474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.528 [2024-06-11 15:17:04.680506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.528 [2024-06-11 15:17:04.680541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.528 [2024-06-11 15:17:04.680741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.528 [2024-06-11 15:17:04.680924] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.528 [2024-06-11 15:17:04.680936] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.528 [2024-06-11 15:17:04.680947] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.528 [2024-06-11 15:17:04.683566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.528 [2024-06-11 15:17:04.692522] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.528 [2024-06-11 15:17:04.693045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.528 [2024-06-11 15:17:04.693469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.528 [2024-06-11 15:17:04.693509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.528 [2024-06-11 15:17:04.693522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.528 [2024-06-11 15:17:04.693720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.528 [2024-06-11 15:17:04.693874] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.528 [2024-06-11 15:17:04.693887] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.528 [2024-06-11 15:17:04.693898] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.528 [2024-06-11 15:17:04.696723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.528 [2024-06-11 15:17:04.705575] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.528 [2024-06-11 15:17:04.706152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.528 [2024-06-11 15:17:04.706514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.528 [2024-06-11 15:17:04.706545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.528 [2024-06-11 15:17:04.706568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.528 [2024-06-11 15:17:04.706949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.528 [2024-06-11 15:17:04.707261] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.528 [2024-06-11 15:17:04.707287] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.528 [2024-06-11 15:17:04.707310] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.528 [2024-06-11 15:17:04.710188] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.528 [2024-06-11 15:17:04.718318] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.529 [2024-06-11 15:17:04.718923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.719279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.719313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.529 [2024-06-11 15:17:04.719336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.529 [2024-06-11 15:17:04.719533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.529 [2024-06-11 15:17:04.719709] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.529 [2024-06-11 15:17:04.719726] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.529 [2024-06-11 15:17:04.719736] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.529 [2024-06-11 15:17:04.722309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.529 [2024-06-11 15:17:04.731222] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.529 [2024-06-11 15:17:04.731733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.732118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.732178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.529 [2024-06-11 15:17:04.732202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.529 [2024-06-11 15:17:04.732388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.529 [2024-06-11 15:17:04.732672] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.529 [2024-06-11 15:17:04.732708] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.529 [2024-06-11 15:17:04.732720] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.529 [2024-06-11 15:17:04.735610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.529 [2024-06-11 15:17:04.744109] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.529 [2024-06-11 15:17:04.744588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.744938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.744970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.529 [2024-06-11 15:17:04.744993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.529 [2024-06-11 15:17:04.745447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.529 [2024-06-11 15:17:04.745624] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.529 [2024-06-11 15:17:04.745637] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.529 [2024-06-11 15:17:04.745647] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.529 [2024-06-11 15:17:04.748489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.529 [2024-06-11 15:17:04.757077] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.529 [2024-06-11 15:17:04.757646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.758052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.758085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.529 [2024-06-11 15:17:04.758108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.529 [2024-06-11 15:17:04.758538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.529 [2024-06-11 15:17:04.758884] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.529 [2024-06-11 15:17:04.758897] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.529 [2024-06-11 15:17:04.758912] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.529 [2024-06-11 15:17:04.761395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.529 [2024-06-11 15:17:04.770100] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.529 [2024-06-11 15:17:04.770700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.771063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.771096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.529 [2024-06-11 15:17:04.771118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.529 [2024-06-11 15:17:04.771499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.529 [2024-06-11 15:17:04.771787] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.529 [2024-06-11 15:17:04.771800] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.529 [2024-06-11 15:17:04.771810] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.529 [2024-06-11 15:17:04.774357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.529 [2024-06-11 15:17:04.783091] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.529 [2024-06-11 15:17:04.783528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.783883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.529 [2024-06-11 15:17:04.783915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.529 [2024-06-11 15:17:04.783937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.529 [2024-06-11 15:17:04.784380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.529 [2024-06-11 15:17:04.784714] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.529 [2024-06-11 15:17:04.784740] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.529 [2024-06-11 15:17:04.784761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.529 [2024-06-11 15:17:04.787742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.530 [2024-06-11 15:17:04.796163] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.796753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.797091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.797124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.530 [2024-06-11 15:17:04.797147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.530 [2024-06-11 15:17:04.797529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.530 [2024-06-11 15:17:04.797943] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.530 [2024-06-11 15:17:04.797957] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.530 [2024-06-11 15:17:04.797966] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.530 [2024-06-11 15:17:04.800724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.530 [2024-06-11 15:17:04.809189] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.809768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.810133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.810166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.530 [2024-06-11 15:17:04.810188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.530 [2024-06-11 15:17:04.810518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.530 [2024-06-11 15:17:04.810854] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.530 [2024-06-11 15:17:04.810868] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.530 [2024-06-11 15:17:04.810878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.530 [2024-06-11 15:17:04.813633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.530 [2024-06-11 15:17:04.822334] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.822902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.823291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.823323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.530 [2024-06-11 15:17:04.823345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.530 [2024-06-11 15:17:04.823808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.530 [2024-06-11 15:17:04.823962] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.530 [2024-06-11 15:17:04.823975] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.530 [2024-06-11 15:17:04.823984] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.530 [2024-06-11 15:17:04.826582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.530 [2024-06-11 15:17:04.835739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.836349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.836742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.836772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.530 [2024-06-11 15:17:04.836795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.530 [2024-06-11 15:17:04.837043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.530 [2024-06-11 15:17:04.837221] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.530 [2024-06-11 15:17:04.837234] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.530 [2024-06-11 15:17:04.837245] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.530 [2024-06-11 15:17:04.839836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.530 [2024-06-11 15:17:04.848812] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.849418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.849745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.849775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.530 [2024-06-11 15:17:04.849797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.530 [2024-06-11 15:17:04.850058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.530 [2024-06-11 15:17:04.850235] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.530 [2024-06-11 15:17:04.850249] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.530 [2024-06-11 15:17:04.850259] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.530 [2024-06-11 15:17:04.852759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.530 [2024-06-11 15:17:04.861961] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.862552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.862863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.862895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.530 [2024-06-11 15:17:04.862918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.530 [2024-06-11 15:17:04.863417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.530 [2024-06-11 15:17:04.863853] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.530 [2024-06-11 15:17:04.863878] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.530 [2024-06-11 15:17:04.863900] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.530 [2024-06-11 15:17:04.866580] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.530 [2024-06-11 15:17:04.875109] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.875609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.875986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.876018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.530 [2024-06-11 15:17:04.876058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.530 [2024-06-11 15:17:04.876339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.530 [2024-06-11 15:17:04.876618] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.530 [2024-06-11 15:17:04.876637] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.530 [2024-06-11 15:17:04.876651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.530 [2024-06-11 15:17:04.880757] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.530 [2024-06-11 15:17:04.888307] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.888876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.889202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.889235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.530 [2024-06-11 15:17:04.889257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.530 [2024-06-11 15:17:04.889688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.530 [2024-06-11 15:17:04.890005] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.530 [2024-06-11 15:17:04.890018] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.530 [2024-06-11 15:17:04.890034] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.530 [2024-06-11 15:17:04.892603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.530 [2024-06-11 15:17:04.901349] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.901833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.902243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.530 [2024-06-11 15:17:04.902275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.530 [2024-06-11 15:17:04.902297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.530 [2024-06-11 15:17:04.902726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.530 [2024-06-11 15:17:04.903089] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.530 [2024-06-11 15:17:04.903102] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.530 [2024-06-11 15:17:04.903113] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.530 [2024-06-11 15:17:04.905770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.530 [2024-06-11 15:17:04.914166] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.530 [2024-06-11 15:17:04.914736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.915145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.915178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:04.915200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.531 [2024-06-11 15:17:04.915731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.531 [2024-06-11 15:17:04.915963] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.531 [2024-06-11 15:17:04.915977] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.531 [2024-06-11 15:17:04.915987] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.531 [2024-06-11 15:17:04.918740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.531 [2024-06-11 15:17:04.927259] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.531 [2024-06-11 15:17:04.927683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.928050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.928082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:04.928105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.531 [2024-06-11 15:17:04.928585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.531 [2024-06-11 15:17:04.929015] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.531 [2024-06-11 15:17:04.929052] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.531 [2024-06-11 15:17:04.929074] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.531 [2024-06-11 15:17:04.931643] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.531 [2024-06-11 15:17:04.940436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.531 [2024-06-11 15:17:04.940875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.941253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.941288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:04.941310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.531 [2024-06-11 15:17:04.941742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.531 [2024-06-11 15:17:04.942139] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.531 [2024-06-11 15:17:04.942154] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.531 [2024-06-11 15:17:04.942164] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.531 [2024-06-11 15:17:04.944984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.531 [2024-06-11 15:17:04.953283] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.531 [2024-06-11 15:17:04.953823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.954206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.954238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:04.954260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.531 [2024-06-11 15:17:04.954494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.531 [2024-06-11 15:17:04.954928] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.531 [2024-06-11 15:17:04.954941] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.531 [2024-06-11 15:17:04.954951] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.531 [2024-06-11 15:17:04.957612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.531 [2024-06-11 15:17:04.966357] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.531 [2024-06-11 15:17:04.966934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.967332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.967365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:04.967394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.531 [2024-06-11 15:17:04.967675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.531 [2024-06-11 15:17:04.968109] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.531 [2024-06-11 15:17:04.968128] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.531 [2024-06-11 15:17:04.968141] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.531 [2024-06-11 15:17:04.972606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.531 [2024-06-11 15:17:04.979951] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.531 [2024-06-11 15:17:04.980529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.980855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.980887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:04.980908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.531 [2024-06-11 15:17:04.981113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.531 [2024-06-11 15:17:04.981314] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.531 [2024-06-11 15:17:04.981325] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.531 [2024-06-11 15:17:04.981334] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.531 [2024-06-11 15:17:04.984149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.531 [2024-06-11 15:17:04.992686] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.531 [2024-06-11 15:17:04.993159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.993543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:04.993575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:04.993596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.531 [2024-06-11 15:17:04.993976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.531 [2024-06-11 15:17:04.994233] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.531 [2024-06-11 15:17:04.994247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.531 [2024-06-11 15:17:04.994257] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.531 [2024-06-11 15:17:04.997053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.531 [2024-06-11 15:17:05.005743] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.531 [2024-06-11 15:17:05.006346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:05.006727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:05.006758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:05.006779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.531 [2024-06-11 15:17:05.007199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.531 [2024-06-11 15:17:05.007374] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.531 [2024-06-11 15:17:05.007387] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.531 [2024-06-11 15:17:05.007397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.531 [2024-06-11 15:17:05.010170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.531 [2024-06-11 15:17:05.018837] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.531 [2024-06-11 15:17:05.019288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:05.019617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:05.019648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:05.019669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.531 [2024-06-11 15:17:05.020000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.531 [2024-06-11 15:17:05.020404] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.531 [2024-06-11 15:17:05.020419] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.531 [2024-06-11 15:17:05.020428] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.531 [2024-06-11 15:17:05.023155] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.531 [2024-06-11 15:17:05.032030] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.531 [2024-06-11 15:17:05.032503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:05.032822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.531 [2024-06-11 15:17:05.032853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.531 [2024-06-11 15:17:05.032875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.033269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.033509] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.033522] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.532 [2024-06-11 15:17:05.033532] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.532 [2024-06-11 15:17:05.036375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.532 [2024-06-11 15:17:05.045170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.532 [2024-06-11 15:17:05.045768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.046059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.046092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.532 [2024-06-11 15:17:05.046113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.046353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.046547] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.046561] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.532 [2024-06-11 15:17:05.046570] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.532 [2024-06-11 15:17:05.049591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.532 [2024-06-11 15:17:05.058148] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.532 [2024-06-11 15:17:05.058713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.059094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.059127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.532 [2024-06-11 15:17:05.059149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.059488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.059778] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.059795] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.532 [2024-06-11 15:17:05.059809] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.532 [2024-06-11 15:17:05.063746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.532 [2024-06-11 15:17:05.071323] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.532 [2024-06-11 15:17:05.071936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.072268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.072300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.532 [2024-06-11 15:17:05.072323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.072604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.072810] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.072824] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.532 [2024-06-11 15:17:05.072834] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.532 [2024-06-11 15:17:05.075469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.532 [2024-06-11 15:17:05.084344] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.532 [2024-06-11 15:17:05.084847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.085201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.085218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.532 [2024-06-11 15:17:05.085229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.085403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.085561] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.085574] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.532 [2024-06-11 15:17:05.085584] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.532 [2024-06-11 15:17:05.088176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.532 [2024-06-11 15:17:05.097346] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.532 [2024-06-11 15:17:05.097866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.098131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.098148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.532 [2024-06-11 15:17:05.098158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.098289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.098419] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.098432] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.532 [2024-06-11 15:17:05.098441] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.532 [2024-06-11 15:17:05.101125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.532 [2024-06-11 15:17:05.110060] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.532 [2024-06-11 15:17:05.110603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.110996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.111042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.532 [2024-06-11 15:17:05.111067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.111495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.111737] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.111751] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.532 [2024-06-11 15:17:05.111761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.532 [2024-06-11 15:17:05.114490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.532 [2024-06-11 15:17:05.123227] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.532 [2024-06-11 15:17:05.123783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.124110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.124142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.532 [2024-06-11 15:17:05.124164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.124455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.124677] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.124690] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.532 [2024-06-11 15:17:05.124704] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.532 [2024-06-11 15:17:05.127430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.532 [2024-06-11 15:17:05.136312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.532 [2024-06-11 15:17:05.136874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.137223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.137256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.532 [2024-06-11 15:17:05.137278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.137480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.137701] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.137714] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.532 [2024-06-11 15:17:05.137724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.532 [2024-06-11 15:17:05.140429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.532 [2024-06-11 15:17:05.149376] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.532 [2024-06-11 15:17:05.149951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.150285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.532 [2024-06-11 15:17:05.150318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.532 [2024-06-11 15:17:05.150340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.532 [2024-06-11 15:17:05.150627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.532 [2024-06-11 15:17:05.150883] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.532 [2024-06-11 15:17:05.150902] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.150916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.155125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.533 [2024-06-11 15:17:05.162753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.163319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.163688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.163719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.533 [2024-06-11 15:17:05.163741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.533 [2024-06-11 15:17:05.164075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.533 [2024-06-11 15:17:05.164229] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.533 [2024-06-11 15:17:05.164242] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.164256] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.167073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.533 [2024-06-11 15:17:05.175902] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.176415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.176846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.176877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.533 [2024-06-11 15:17:05.176899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.533 [2024-06-11 15:17:05.177344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.533 [2024-06-11 15:17:05.177602] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.533 [2024-06-11 15:17:05.177615] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.177625] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.180494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.533 [2024-06-11 15:17:05.188929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.189469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.189819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.189850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.533 [2024-06-11 15:17:05.189872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.533 [2024-06-11 15:17:05.190069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.533 [2024-06-11 15:17:05.190247] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.533 [2024-06-11 15:17:05.190260] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.190270] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.193203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.533 [2024-06-11 15:17:05.201880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.202407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.202709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.202740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.533 [2024-06-11 15:17:05.202762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.533 [2024-06-11 15:17:05.203002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.533 [2024-06-11 15:17:05.203185] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.533 [2024-06-11 15:17:05.203199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.203209] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.205937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.533 [2024-06-11 15:17:05.215033] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.215576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.215888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.215918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.533 [2024-06-11 15:17:05.215941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.533 [2024-06-11 15:17:05.216339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.533 [2024-06-11 15:17:05.216724] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.533 [2024-06-11 15:17:05.216748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.216769] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.219771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.533 [2024-06-11 15:17:05.228132] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.228635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.228997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.229040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.533 [2024-06-11 15:17:05.229064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.533 [2024-06-11 15:17:05.229494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.533 [2024-06-11 15:17:05.229728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.533 [2024-06-11 15:17:05.229741] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.229751] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.232437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.533 [2024-06-11 15:17:05.241153] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.241660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.241980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.242009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.533 [2024-06-11 15:17:05.242044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.533 [2024-06-11 15:17:05.242576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.533 [2024-06-11 15:17:05.242840] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.533 [2024-06-11 15:17:05.242853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.242863] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.245569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.533 [2024-06-11 15:17:05.254222] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.254747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.255147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.255180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.533 [2024-06-11 15:17:05.255203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.533 [2024-06-11 15:17:05.255534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.533 [2024-06-11 15:17:05.255869] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.533 [2024-06-11 15:17:05.255894] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.255918] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.258648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.533 [2024-06-11 15:17:05.267303] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.267784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.268106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.533 [2024-06-11 15:17:05.268139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.533 [2024-06-11 15:17:05.268162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.533 [2024-06-11 15:17:05.268591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.533 [2024-06-11 15:17:05.268824] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.533 [2024-06-11 15:17:05.268849] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.533 [2024-06-11 15:17:05.268881] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.533 [2024-06-11 15:17:05.271610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.533 [2024-06-11 15:17:05.280463] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.533 [2024-06-11 15:17:05.281013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.281415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.281447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.534 [2024-06-11 15:17:05.281469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.534 [2024-06-11 15:17:05.281849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.534 [2024-06-11 15:17:05.282211] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.534 [2024-06-11 15:17:05.282230] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.534 [2024-06-11 15:17:05.282245] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.534 [2024-06-11 15:17:05.286239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.534 [2024-06-11 15:17:05.293993] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.534 [2024-06-11 15:17:05.294545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.294938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.294969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.534 [2024-06-11 15:17:05.294991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.534 [2024-06-11 15:17:05.295169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.534 [2024-06-11 15:17:05.295389] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.534 [2024-06-11 15:17:05.295402] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.534 [2024-06-11 15:17:05.295411] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.534 [2024-06-11 15:17:05.298023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.534 [2024-06-11 15:17:05.306954] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.534 [2024-06-11 15:17:05.307388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.307738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.307769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.534 [2024-06-11 15:17:05.307790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.534 [2024-06-11 15:17:05.308131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.534 [2024-06-11 15:17:05.308307] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.534 [2024-06-11 15:17:05.308320] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.534 [2024-06-11 15:17:05.308329] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.534 [2024-06-11 15:17:05.310807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.534 [2024-06-11 15:17:05.320209] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.534 [2024-06-11 15:17:05.320730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.321135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.321175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.534 [2024-06-11 15:17:05.321186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.534 [2024-06-11 15:17:05.321317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.534 [2024-06-11 15:17:05.321469] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.534 [2024-06-11 15:17:05.321483] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.534 [2024-06-11 15:17:05.321493] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.534 [2024-06-11 15:17:05.324061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.534 [2024-06-11 15:17:05.332982] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.534 [2024-06-11 15:17:05.333561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.333856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.333887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.534 [2024-06-11 15:17:05.333916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.534 [2024-06-11 15:17:05.334162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.534 [2024-06-11 15:17:05.334434] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.534 [2024-06-11 15:17:05.334448] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.534 [2024-06-11 15:17:05.334457] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.534 [2024-06-11 15:17:05.337184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.534 [2024-06-11 15:17:05.346016] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.534 [2024-06-11 15:17:05.346572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.346921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.534 [2024-06-11 15:17:05.346953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.534 [2024-06-11 15:17:05.346974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.534 [2024-06-11 15:17:05.347284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.534 [2024-06-11 15:17:05.347415] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.534 [2024-06-11 15:17:05.347428] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.534 [2024-06-11 15:17:05.347438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.534 [2024-06-11 15:17:05.350235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.534 [2024-06-11 15:17:05.359131] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.534 [2024-06-11 15:17:05.359657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.360075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.360108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.795 [2024-06-11 15:17:05.360130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.795 [2024-06-11 15:17:05.360452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.795 [2024-06-11 15:17:05.360606] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.795 [2024-06-11 15:17:05.360619] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.795 [2024-06-11 15:17:05.360628] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.795 [2024-06-11 15:17:05.363176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.795 [2024-06-11 15:17:05.371986] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.795 [2024-06-11 15:17:05.372449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.372827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.372858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.795 [2024-06-11 15:17:05.372886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.795 [2024-06-11 15:17:05.373111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.795 [2024-06-11 15:17:05.373266] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.795 [2024-06-11 15:17:05.373284] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.795 [2024-06-11 15:17:05.373298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.795 [2024-06-11 15:17:05.376998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.795 [2024-06-11 15:17:05.385462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.795 [2024-06-11 15:17:05.385991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.386415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.386447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.795 [2024-06-11 15:17:05.386480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.795 [2024-06-11 15:17:05.386655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.795 [2024-06-11 15:17:05.386808] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.795 [2024-06-11 15:17:05.386821] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.795 [2024-06-11 15:17:05.386831] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.795 [2024-06-11 15:17:05.389605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.795 [2024-06-11 15:17:05.398569] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.795 [2024-06-11 15:17:05.399115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.399496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.399527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.795 [2024-06-11 15:17:05.399549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.795 [2024-06-11 15:17:05.399880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.795 [2024-06-11 15:17:05.400274] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.795 [2024-06-11 15:17:05.400313] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.795 [2024-06-11 15:17:05.400324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.795 [2024-06-11 15:17:05.403004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.795 [2024-06-11 15:17:05.411596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.795 [2024-06-11 15:17:05.412073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.412416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.412448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.795 [2024-06-11 15:17:05.412469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.795 [2024-06-11 15:17:05.412806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.795 [2024-06-11 15:17:05.413217] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.795 [2024-06-11 15:17:05.413232] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.795 [2024-06-11 15:17:05.413241] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.795 [2024-06-11 15:17:05.415986] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.795 [2024-06-11 15:17:05.424640] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.795 [2024-06-11 15:17:05.425203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.425467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.425498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.795 [2024-06-11 15:17:05.425520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.795 [2024-06-11 15:17:05.425852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.795 [2024-06-11 15:17:05.426202] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.795 [2024-06-11 15:17:05.426216] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.795 [2024-06-11 15:17:05.426227] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.795 [2024-06-11 15:17:05.428863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.795 [2024-06-11 15:17:05.437777] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.795 [2024-06-11 15:17:05.438293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.438629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.795 [2024-06-11 15:17:05.438660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.795 [2024-06-11 15:17:05.438682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.795 [2024-06-11 15:17:05.439012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.795 [2024-06-11 15:17:05.439359] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.439373] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.439383] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.442295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.796 [2024-06-11 15:17:05.450661] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.796 [2024-06-11 15:17:05.451138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.451961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.451986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.796 [2024-06-11 15:17:05.451998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.796 [2024-06-11 15:17:05.452163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.796 [2024-06-11 15:17:05.452321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.452334] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.452345] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.455216] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.796 [2024-06-11 15:17:05.463627] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.796 [2024-06-11 15:17:05.464117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.464480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.464511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.796 [2024-06-11 15:17:05.464533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.796 [2024-06-11 15:17:05.464785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.796 [2024-06-11 15:17:05.464983] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.464997] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.465006] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.467694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.796 [2024-06-11 15:17:05.476744] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.796 [2024-06-11 15:17:05.477256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.477590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.477621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.796 [2024-06-11 15:17:05.477643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.796 [2024-06-11 15:17:05.478036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.796 [2024-06-11 15:17:05.478410] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.478423] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.478434] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.481145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.796 [2024-06-11 15:17:05.489780] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.796 [2024-06-11 15:17:05.490328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.490627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.490658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.796 [2024-06-11 15:17:05.490680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.796 [2024-06-11 15:17:05.491070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.796 [2024-06-11 15:17:05.491360] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.491378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.491388] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.494076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.796 [2024-06-11 15:17:05.502456] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.796 [2024-06-11 15:17:05.502908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.503160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.503192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.796 [2024-06-11 15:17:05.503215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.796 [2024-06-11 15:17:05.503539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.796 [2024-06-11 15:17:05.503692] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.503704] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.503714] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.506655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.796 [2024-06-11 15:17:05.515378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.796 [2024-06-11 15:17:05.515864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.516179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.516215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.796 [2024-06-11 15:17:05.516238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.796 [2024-06-11 15:17:05.516621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.796 [2024-06-11 15:17:05.516820] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.516833] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.516843] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.519664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.796 [2024-06-11 15:17:05.528502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.796 [2024-06-11 15:17:05.528977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.529254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.529270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.796 [2024-06-11 15:17:05.529281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.796 [2024-06-11 15:17:05.529410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.796 [2024-06-11 15:17:05.529563] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.529575] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.529588] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.532453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.796 [2024-06-11 15:17:05.541232] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.796 [2024-06-11 15:17:05.541631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.541919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.541950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.796 [2024-06-11 15:17:05.541973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.796 [2024-06-11 15:17:05.542318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.796 [2024-06-11 15:17:05.542641] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.542655] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.542664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.545262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.796 [2024-06-11 15:17:05.554017] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.796 [2024-06-11 15:17:05.554559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.554819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.796 [2024-06-11 15:17:05.554850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.796 [2024-06-11 15:17:05.554872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.796 [2024-06-11 15:17:05.555315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.796 [2024-06-11 15:17:05.555640] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.796 [2024-06-11 15:17:05.555652] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.796 [2024-06-11 15:17:05.555662] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.796 [2024-06-11 15:17:05.559124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.796 [2024-06-11 15:17:05.567586] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.797 [2024-06-11 15:17:05.567965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.568310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.568326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.797 [2024-06-11 15:17:05.568336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.797 [2024-06-11 15:17:05.568512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.797 [2024-06-11 15:17:05.568620] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.797 [2024-06-11 15:17:05.568633] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.797 [2024-06-11 15:17:05.568642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.797 [2024-06-11 15:17:05.571380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.797 [2024-06-11 15:17:05.580720] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.797 [2024-06-11 15:17:05.581282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.581636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.581667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.797 [2024-06-11 15:17:05.581689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.797 [2024-06-11 15:17:05.582003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.797 [2024-06-11 15:17:05.582139] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.797 [2024-06-11 15:17:05.582152] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.797 [2024-06-11 15:17:05.582162] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.797 [2024-06-11 15:17:05.584731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.797 [2024-06-11 15:17:05.593877] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.797 [2024-06-11 15:17:05.594467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.594822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.594853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.797 [2024-06-11 15:17:05.594875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.797 [2024-06-11 15:17:05.595268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.797 [2024-06-11 15:17:05.595701] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.797 [2024-06-11 15:17:05.595726] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.797 [2024-06-11 15:17:05.595747] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.797 [2024-06-11 15:17:05.598419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.797 [2024-06-11 15:17:05.606943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.797 [2024-06-11 15:17:05.607404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.607603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.607619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.797 [2024-06-11 15:17:05.607630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.797 [2024-06-11 15:17:05.607873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.797 [2024-06-11 15:17:05.608079] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.797 [2024-06-11 15:17:05.608092] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.797 [2024-06-11 15:17:05.608102] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.797 [2024-06-11 15:17:05.610827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:46.797 [2024-06-11 15:17:05.620065] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.797 [2024-06-11 15:17:05.620492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.620818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.620850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.797 [2024-06-11 15:17:05.620871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.797 [2024-06-11 15:17:05.621366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.797 [2024-06-11 15:17:05.621684] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.797 [2024-06-11 15:17:05.621698] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.797 [2024-06-11 15:17:05.621707] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.797 [2024-06-11 15:17:05.624666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.797 [2024-06-11 15:17:05.632939] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.797 [2024-06-11 15:17:05.633333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.633652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.797 [2024-06-11 15:17:05.633682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:46.797 [2024-06-11 15:17:05.633704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:46.797 [2024-06-11 15:17:05.634066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:46.797 [2024-06-11 15:17:05.634288] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.797 [2024-06-11 15:17:05.634301] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.797 [2024-06-11 15:17:05.634311] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.058 [2024-06-11 15:17:05.637226] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.058 [2024-06-11 15:17:05.645925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.058 [2024-06-11 15:17:05.646333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.646609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.646640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.058 [2024-06-11 15:17:05.646662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.058 [2024-06-11 15:17:05.646993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.058 [2024-06-11 15:17:05.647284] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.058 [2024-06-11 15:17:05.647297] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.058 [2024-06-11 15:17:05.647307] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.058 [2024-06-11 15:17:05.649918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.058 [2024-06-11 15:17:05.658975] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.058 [2024-06-11 15:17:05.659306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.659594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.659626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.058 [2024-06-11 15:17:05.659648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.058 [2024-06-11 15:17:05.659977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.058 [2024-06-11 15:17:05.660372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.058 [2024-06-11 15:17:05.660398] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.058 [2024-06-11 15:17:05.660420] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.058 [2024-06-11 15:17:05.663350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.058 [2024-06-11 15:17:05.672077] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.058 [2024-06-11 15:17:05.672474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.672772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.672803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.058 [2024-06-11 15:17:05.672825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.058 [2024-06-11 15:17:05.673218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.058 [2024-06-11 15:17:05.673552] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.058 [2024-06-11 15:17:05.673577] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.058 [2024-06-11 15:17:05.673597] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.058 [2024-06-11 15:17:05.676389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.058 [2024-06-11 15:17:05.685296] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.058 [2024-06-11 15:17:05.685672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.685894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.685909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.058 [2024-06-11 15:17:05.685920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.058 [2024-06-11 15:17:05.686056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.058 [2024-06-11 15:17:05.686165] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.058 [2024-06-11 15:17:05.686178] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.058 [2024-06-11 15:17:05.686188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.058 [2024-06-11 15:17:05.688849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.058 [2024-06-11 15:17:05.698356] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.058 [2024-06-11 15:17:05.698783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.699129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.699150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.058 [2024-06-11 15:17:05.699161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.058 [2024-06-11 15:17:05.699314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.058 [2024-06-11 15:17:05.699488] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.058 [2024-06-11 15:17:05.699501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.058 [2024-06-11 15:17:05.699511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.058 [2024-06-11 15:17:05.702304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.058 [2024-06-11 15:17:05.711207] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.058 [2024-06-11 15:17:05.711752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.712045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.712062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.058 [2024-06-11 15:17:05.712072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.058 [2024-06-11 15:17:05.712270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.058 [2024-06-11 15:17:05.712423] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.058 [2024-06-11 15:17:05.712436] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.058 [2024-06-11 15:17:05.712446] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.058 [2024-06-11 15:17:05.715269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.058 [2024-06-11 15:17:05.724136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.058 [2024-06-11 15:17:05.724640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.724907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.724922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.058 [2024-06-11 15:17:05.724932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.058 [2024-06-11 15:17:05.725092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.058 [2024-06-11 15:17:05.725245] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.058 [2024-06-11 15:17:05.725257] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.058 [2024-06-11 15:17:05.725267] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.058 [2024-06-11 15:17:05.727901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.058 [2024-06-11 15:17:05.737016] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.058 [2024-06-11 15:17:05.737621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.737966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.058 [2024-06-11 15:17:05.737982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.058 [2024-06-11 15:17:05.737997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.058 [2024-06-11 15:17:05.738156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.738287] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.738300] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.738309] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.740852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.059 [2024-06-11 15:17:05.749822] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.059 [2024-06-11 15:17:05.750288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.750631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.750647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.059 [2024-06-11 15:17:05.750657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.059 [2024-06-11 15:17:05.750832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.751061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.751075] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.751085] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.753786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.059 [2024-06-11 15:17:05.762887] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.059 [2024-06-11 15:17:05.763454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.763768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.763784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.059 [2024-06-11 15:17:05.763794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.059 [2024-06-11 15:17:05.763991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.764150] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.764164] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.764173] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.766948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.059 [2024-06-11 15:17:05.775934] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.059 [2024-06-11 15:17:05.776484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.776785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.776801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.059 [2024-06-11 15:17:05.776811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.059 [2024-06-11 15:17:05.777041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.777172] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.777185] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.777194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.779788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.059 [2024-06-11 15:17:05.788923] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.059 [2024-06-11 15:17:05.789480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.789766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.789782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.059 [2024-06-11 15:17:05.789792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.059 [2024-06-11 15:17:05.789921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.790104] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.790117] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.790127] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.792628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.059 [2024-06-11 15:17:05.801955] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.059 [2024-06-11 15:17:05.802486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.802759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.802774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.059 [2024-06-11 15:17:05.802785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.059 [2024-06-11 15:17:05.803005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.803210] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.803224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.803233] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.806030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.059 [2024-06-11 15:17:05.814987] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.059 [2024-06-11 15:17:05.815372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.815663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.815679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.059 [2024-06-11 15:17:05.815690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.059 [2024-06-11 15:17:05.815888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.816022] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.816043] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.816053] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.818620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.059 [2024-06-11 15:17:05.827878] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.059 [2024-06-11 15:17:05.828234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.828464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.828480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.059 [2024-06-11 15:17:05.828490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.059 [2024-06-11 15:17:05.828666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.828865] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.828878] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.828888] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.831508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.059 [2024-06-11 15:17:05.840970] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.059 [2024-06-11 15:17:05.841544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.841809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.841825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.059 [2024-06-11 15:17:05.841836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.059 [2024-06-11 15:17:05.842012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.842194] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.842208] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.842218] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.845127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.059 [2024-06-11 15:17:05.853869] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.059 [2024-06-11 15:17:05.854358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.854709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.059 [2024-06-11 15:17:05.854724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.059 [2024-06-11 15:17:05.854735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.059 [2024-06-11 15:17:05.854955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.059 [2024-06-11 15:17:05.855161] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.059 [2024-06-11 15:17:05.855184] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.059 [2024-06-11 15:17:05.855194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.059 [2024-06-11 15:17:05.857832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.060 [2024-06-11 15:17:05.866904] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.060 [2024-06-11 15:17:05.867513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.060 [2024-06-11 15:17:05.867709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.060 [2024-06-11 15:17:05.867725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.060 [2024-06-11 15:17:05.867735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.060 [2024-06-11 15:17:05.867911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.060 [2024-06-11 15:17:05.868093] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.060 [2024-06-11 15:17:05.868107] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.060 [2024-06-11 15:17:05.868118] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.060 [2024-06-11 15:17:05.870890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.060 [2024-06-11 15:17:05.879996] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.060 [2024-06-11 15:17:05.880512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.060 [2024-06-11 15:17:05.880822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.060 [2024-06-11 15:17:05.880853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.060 [2024-06-11 15:17:05.880876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.060 [2024-06-11 15:17:05.881117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.060 [2024-06-11 15:17:05.881404] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.060 [2024-06-11 15:17:05.881417] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.060 [2024-06-11 15:17:05.881427] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.060 [2024-06-11 15:17:05.885050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.060 [2024-06-11 15:17:05.893517] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.060 [2024-06-11 15:17:05.893940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.060 [2024-06-11 15:17:05.894285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.060 [2024-06-11 15:17:05.894302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.060 [2024-06-11 15:17:05.894313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.060 [2024-06-11 15:17:05.894533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.060 [2024-06-11 15:17:05.894708] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.060 [2024-06-11 15:17:05.894721] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.060 [2024-06-11 15:17:05.894736] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.060 [2024-06-11 15:17:05.897355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.320 [2024-06-11 15:17:05.906525] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.320 [2024-06-11 15:17:05.906972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.320 [2024-06-11 15:17:05.907322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.320 [2024-06-11 15:17:05.907354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.320 [2024-06-11 15:17:05.907376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.320 [2024-06-11 15:17:05.907756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.320 [2024-06-11 15:17:05.908113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.320 [2024-06-11 15:17:05.908140] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:05.908160] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:05.911033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.321 [2024-06-11 15:17:05.919618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.321 [2024-06-11 15:17:05.920114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.920342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.920373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.321 [2024-06-11 15:17:05.920396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.321 [2024-06-11 15:17:05.920778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.321 [2024-06-11 15:17:05.921270] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.321 [2024-06-11 15:17:05.921297] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:05.921317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:05.925779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.321 [2024-06-11 15:17:05.933419] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.321 [2024-06-11 15:17:05.933842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.934146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.934179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.321 [2024-06-11 15:17:05.934201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.321 [2024-06-11 15:17:05.934521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.321 [2024-06-11 15:17:05.934674] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.321 [2024-06-11 15:17:05.934688] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:05.934698] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:05.937660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.321 [2024-06-11 15:17:05.946502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.321 [2024-06-11 15:17:05.947019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.947320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.947352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.321 [2024-06-11 15:17:05.947374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.321 [2024-06-11 15:17:05.947655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.321 [2024-06-11 15:17:05.947886] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.321 [2024-06-11 15:17:05.947899] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:05.947909] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:05.950524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.321 [2024-06-11 15:17:05.959345] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.321 [2024-06-11 15:17:05.959788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.960460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.960482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.321 [2024-06-11 15:17:05.960493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.321 [2024-06-11 15:17:05.960652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.321 [2024-06-11 15:17:05.960827] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.321 [2024-06-11 15:17:05.960839] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:05.960849] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:05.963677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.321 [2024-06-11 15:17:05.972063] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.321 [2024-06-11 15:17:05.972662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.972962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.972994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.321 [2024-06-11 15:17:05.973017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.321 [2024-06-11 15:17:05.973313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.321 [2024-06-11 15:17:05.973646] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.321 [2024-06-11 15:17:05.973676] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:05.973685] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:05.976371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.321 [2024-06-11 15:17:05.985293] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.321 [2024-06-11 15:17:05.985714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.985959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.985990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.321 [2024-06-11 15:17:05.986012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.321 [2024-06-11 15:17:05.986457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.321 [2024-06-11 15:17:05.986850] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.321 [2024-06-11 15:17:05.986863] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:05.986873] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:05.989651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.321 [2024-06-11 15:17:05.998066] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.321 [2024-06-11 15:17:05.998672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.998967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:05.998999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.321 [2024-06-11 15:17:05.999021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.321 [2024-06-11 15:17:05.999416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.321 [2024-06-11 15:17:05.999750] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.321 [2024-06-11 15:17:05.999783] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:05.999793] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:06.002636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.321 [2024-06-11 15:17:06.011229] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.321 [2024-06-11 15:17:06.011788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:06.012073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:06.012106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.321 [2024-06-11 15:17:06.012128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.321 [2024-06-11 15:17:06.012411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.321 [2024-06-11 15:17:06.012743] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.321 [2024-06-11 15:17:06.012768] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:06.012789] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:06.016483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.321 [2024-06-11 15:17:06.025019] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.321 [2024-06-11 15:17:06.025504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:06.025801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.321 [2024-06-11 15:17:06.025833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.321 [2024-06-11 15:17:06.025855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.321 [2024-06-11 15:17:06.026164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.321 [2024-06-11 15:17:06.026318] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.321 [2024-06-11 15:17:06.026331] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.321 [2024-06-11 15:17:06.026340] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.321 [2024-06-11 15:17:06.029163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.321 [2024-06-11 15:17:06.038092] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.038562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.038953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.038984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.039005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.322 [2024-06-11 15:17:06.039401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.322 [2024-06-11 15:17:06.039795] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.322 [2024-06-11 15:17:06.039808] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.322 [2024-06-11 15:17:06.039817] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.322 [2024-06-11 15:17:06.042388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.322 [2024-06-11 15:17:06.050806] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.051309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.051605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.051635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.051657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.322 [2024-06-11 15:17:06.051934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.322 [2024-06-11 15:17:06.052093] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.322 [2024-06-11 15:17:06.052106] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.322 [2024-06-11 15:17:06.052116] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.322 [2024-06-11 15:17:06.055092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.322 [2024-06-11 15:17:06.063813] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.064328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.064689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.064728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.064750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.322 [2024-06-11 15:17:06.065023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.322 [2024-06-11 15:17:06.065183] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.322 [2024-06-11 15:17:06.065195] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.322 [2024-06-11 15:17:06.065205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.322 [2024-06-11 15:17:06.067862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.322 [2024-06-11 15:17:06.076870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.077388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.077740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.077771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.077793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.322 [2024-06-11 15:17:06.078059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.322 [2024-06-11 15:17:06.078191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.322 [2024-06-11 15:17:06.078204] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.322 [2024-06-11 15:17:06.078213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.322 [2024-06-11 15:17:06.080890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.322 [2024-06-11 15:17:06.090072] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.090649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.090968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.090999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.091021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.322 [2024-06-11 15:17:06.091316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.322 [2024-06-11 15:17:06.091702] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.322 [2024-06-11 15:17:06.091715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.322 [2024-06-11 15:17:06.091724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.322 [2024-06-11 15:17:06.094362] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.322 [2024-06-11 15:17:06.103086] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.103589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.103941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.103971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.104000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.322 [2024-06-11 15:17:06.104183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.322 [2024-06-11 15:17:06.104381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.322 [2024-06-11 15:17:06.104394] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.322 [2024-06-11 15:17:06.104404] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.322 [2024-06-11 15:17:06.107310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.322 [2024-06-11 15:17:06.116131] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.116586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.116964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.116995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.117017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.322 [2024-06-11 15:17:06.117239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.322 [2024-06-11 15:17:06.117393] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.322 [2024-06-11 15:17:06.117406] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.322 [2024-06-11 15:17:06.117416] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.322 [2024-06-11 15:17:06.120032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.322 [2024-06-11 15:17:06.129207] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.129756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.130055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.130088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.130110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.322 [2024-06-11 15:17:06.130364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.322 [2024-06-11 15:17:06.130562] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.322 [2024-06-11 15:17:06.130575] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.322 [2024-06-11 15:17:06.130585] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.322 [2024-06-11 15:17:06.133359] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.322 [2024-06-11 15:17:06.142203] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.142728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.143018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.143063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.143086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.322 [2024-06-11 15:17:06.143524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.322 [2024-06-11 15:17:06.143807] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.322 [2024-06-11 15:17:06.143832] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.322 [2024-06-11 15:17:06.143852] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.322 [2024-06-11 15:17:06.146761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.322 [2024-06-11 15:17:06.155325] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.322 [2024-06-11 15:17:06.155853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.156164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.322 [2024-06-11 15:17:06.156197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.322 [2024-06-11 15:17:06.156229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.323 [2024-06-11 15:17:06.156404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.323 [2024-06-11 15:17:06.156579] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.323 [2024-06-11 15:17:06.156592] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.323 [2024-06-11 15:17:06.156602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.323 [2024-06-11 15:17:06.159355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.583 [2024-06-11 15:17:06.168470] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.583 [2024-06-11 15:17:06.168998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.169298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.169330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.583 [2024-06-11 15:17:06.169352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.583 [2024-06-11 15:17:06.169681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.583 [2024-06-11 15:17:06.170037] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.583 [2024-06-11 15:17:06.170051] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.583 [2024-06-11 15:17:06.170061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.583 [2024-06-11 15:17:06.172806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.583 [2024-06-11 15:17:06.181393] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.583 [2024-06-11 15:17:06.181971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.182317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.182349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.583 [2024-06-11 15:17:06.182371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.583 [2024-06-11 15:17:06.182801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.583 [2024-06-11 15:17:06.183080] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.583 [2024-06-11 15:17:06.183094] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.583 [2024-06-11 15:17:06.183104] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.583 [2024-06-11 15:17:06.185716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.583 [2024-06-11 15:17:06.194307] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.583 [2024-06-11 15:17:06.194933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.195241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.195274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.583 [2024-06-11 15:17:06.195296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.583 [2024-06-11 15:17:06.195533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.583 [2024-06-11 15:17:06.195790] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.583 [2024-06-11 15:17:06.195807] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.583 [2024-06-11 15:17:06.195821] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.583 [2024-06-11 15:17:06.199494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.583 [2024-06-11 15:17:06.207783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.583 [2024-06-11 15:17:06.208250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.208500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.208516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.583 [2024-06-11 15:17:06.208554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.583 [2024-06-11 15:17:06.209118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.583 [2024-06-11 15:17:06.209317] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.583 [2024-06-11 15:17:06.209330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.583 [2024-06-11 15:17:06.209339] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.583 [2024-06-11 15:17:06.211951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.583 [2024-06-11 15:17:06.220857] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.583 [2024-06-11 15:17:06.221399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.221720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.583 [2024-06-11 15:17:06.221751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.583 [2024-06-11 15:17:06.221773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.222169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.222514] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.222537] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.584 [2024-06-11 15:17:06.222547] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.584 [2024-06-11 15:17:06.225459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.584 [2024-06-11 15:17:06.233901] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.584 [2024-06-11 15:17:06.234453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.234754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.234785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.584 [2024-06-11 15:17:06.234807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.235201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.235460] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.235473] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.584 [2024-06-11 15:17:06.235482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.584 [2024-06-11 15:17:06.238257] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.584 [2024-06-11 15:17:06.247043] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.584 [2024-06-11 15:17:06.247583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.247870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.247900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.584 [2024-06-11 15:17:06.247922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.248199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.248376] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.248389] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.584 [2024-06-11 15:17:06.248398] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.584 [2024-06-11 15:17:06.251150] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.584 [2024-06-11 15:17:06.259795] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.584 [2024-06-11 15:17:06.260330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.260682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.260713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.584 [2024-06-11 15:17:06.260735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.261129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.261412] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.261440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.584 [2024-06-11 15:17:06.261454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.584 [2024-06-11 15:17:06.264186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.584 [2024-06-11 15:17:06.272523] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.584 [2024-06-11 15:17:06.272965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.273154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.273186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.584 [2024-06-11 15:17:06.273209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.273457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.273610] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.273623] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.584 [2024-06-11 15:17:06.273633] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.584 [2024-06-11 15:17:06.276340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.584 [2024-06-11 15:17:06.285607] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.584 [2024-06-11 15:17:06.286153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.286539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.286569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.584 [2024-06-11 15:17:06.286591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.287135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.287396] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.287409] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.584 [2024-06-11 15:17:06.287419] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.584 [2024-06-11 15:17:06.291304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.584 [2024-06-11 15:17:06.298880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.584 [2024-06-11 15:17:06.299476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.299853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.299883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.584 [2024-06-11 15:17:06.299905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.300399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.300588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.300601] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.584 [2024-06-11 15:17:06.300611] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.584 [2024-06-11 15:17:06.303278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.584 [2024-06-11 15:17:06.311841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.584 [2024-06-11 15:17:06.312350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.312694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.312725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.584 [2024-06-11 15:17:06.312748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.313045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.313335] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.313348] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.584 [2024-06-11 15:17:06.313357] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.584 [2024-06-11 15:17:06.315992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.584 [2024-06-11 15:17:06.325033] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.584 [2024-06-11 15:17:06.325613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.325992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.326023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.584 [2024-06-11 15:17:06.326058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.326389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.326634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.326648] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.584 [2024-06-11 15:17:06.326657] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.584 [2024-06-11 15:17:06.329298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.584 [2024-06-11 15:17:06.338018] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.584 [2024-06-11 15:17:06.338562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.338861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.584 [2024-06-11 15:17:06.338891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.584 [2024-06-11 15:17:06.338912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.584 [2024-06-11 15:17:06.339357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.584 [2024-06-11 15:17:06.339681] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.584 [2024-06-11 15:17:06.339694] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.585 [2024-06-11 15:17:06.339704] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.585 [2024-06-11 15:17:06.342276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.585 [2024-06-11 15:17:06.351083] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.585 [2024-06-11 15:17:06.351691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.352051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.352084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.585 [2024-06-11 15:17:06.352104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.585 [2024-06-11 15:17:06.352389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.585 [2024-06-11 15:17:06.352565] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.585 [2024-06-11 15:17:06.352577] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.585 [2024-06-11 15:17:06.352587] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.585 [2024-06-11 15:17:06.355493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.585 [2024-06-11 15:17:06.364058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.585 [2024-06-11 15:17:06.364533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.364872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.364887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.585 [2024-06-11 15:17:06.364898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.585 [2024-06-11 15:17:06.365168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.585 [2024-06-11 15:17:06.365344] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.585 [2024-06-11 15:17:06.365357] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.585 [2024-06-11 15:17:06.365368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.585 [2024-06-11 15:17:06.367978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.585 [2024-06-11 15:17:06.377287] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.585 [2024-06-11 15:17:06.377788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.378085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.378118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.585 [2024-06-11 15:17:06.378140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.585 [2024-06-11 15:17:06.378469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.585 [2024-06-11 15:17:06.378805] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.585 [2024-06-11 15:17:06.378818] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.585 [2024-06-11 15:17:06.378828] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.585 [2024-06-11 15:17:06.381351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.585 [2024-06-11 15:17:06.390323] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.585 [2024-06-11 15:17:06.390891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.391179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.391212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.585 [2024-06-11 15:17:06.391234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.585 [2024-06-11 15:17:06.391615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.585 [2024-06-11 15:17:06.391790] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.585 [2024-06-11 15:17:06.391803] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.585 [2024-06-11 15:17:06.391813] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.585 [2024-06-11 15:17:06.394428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.585 [2024-06-11 15:17:06.403355] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.585 [2024-06-11 15:17:06.403919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.404176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.404211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.585 [2024-06-11 15:17:06.404233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.585 [2024-06-11 15:17:06.404535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.585 [2024-06-11 15:17:06.404643] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.585 [2024-06-11 15:17:06.404657] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.585 [2024-06-11 15:17:06.404667] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.585 [2024-06-11 15:17:06.407284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.585 [2024-06-11 15:17:06.416568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.585 [2024-06-11 15:17:06.417170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.417480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.585 [2024-06-11 15:17:06.417510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.585 [2024-06-11 15:17:06.417532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.585 [2024-06-11 15:17:06.417824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.585 [2024-06-11 15:17:06.418054] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.585 [2024-06-11 15:17:06.418068] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.585 [2024-06-11 15:17:06.418078] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.585 [2024-06-11 15:17:06.420893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.846 [2024-06-11 15:17:06.429567] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.846 [2024-06-11 15:17:06.429978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.430257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.430278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.846 [2024-06-11 15:17:06.430288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.846 [2024-06-11 15:17:06.430463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.846 [2024-06-11 15:17:06.430661] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.846 [2024-06-11 15:17:06.430673] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.846 [2024-06-11 15:17:06.430683] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.846 [2024-06-11 15:17:06.433458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.846 [2024-06-11 15:17:06.442326] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.846 [2024-06-11 15:17:06.442826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.443089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.443132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.846 [2024-06-11 15:17:06.443154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.846 [2024-06-11 15:17:06.443535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.846 [2024-06-11 15:17:06.443786] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.846 [2024-06-11 15:17:06.443799] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.846 [2024-06-11 15:17:06.443809] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.846 [2024-06-11 15:17:06.446536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.846 [2024-06-11 15:17:06.455014] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.846 [2024-06-11 15:17:06.455572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.455995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.456010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.846 [2024-06-11 15:17:06.456020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.846 [2024-06-11 15:17:06.456180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.846 [2024-06-11 15:17:06.456355] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.846 [2024-06-11 15:17:06.456368] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.846 [2024-06-11 15:17:06.456378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.846 [2024-06-11 15:17:06.459401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.846 [2024-06-11 15:17:06.468118] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.846 [2024-06-11 15:17:06.468696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.468924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.468955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.846 [2024-06-11 15:17:06.468984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.846 [2024-06-11 15:17:06.469415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.846 [2024-06-11 15:17:06.469738] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.846 [2024-06-11 15:17:06.469756] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.846 [2024-06-11 15:17:06.469771] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.846 [2024-06-11 15:17:06.473673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.846 [2024-06-11 15:17:06.481753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.846 [2024-06-11 15:17:06.482355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.482655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.482685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.846 [2024-06-11 15:17:06.482707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.846 [2024-06-11 15:17:06.482984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.846 [2024-06-11 15:17:06.483165] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.846 [2024-06-11 15:17:06.483178] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.846 [2024-06-11 15:17:06.483188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.846 [2024-06-11 15:17:06.485822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.846 [2024-06-11 15:17:06.494615] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.846 [2024-06-11 15:17:06.495180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.495524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.495554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.846 [2024-06-11 15:17:06.495577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.846 [2024-06-11 15:17:06.496004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.846 [2024-06-11 15:17:06.496134] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.846 [2024-06-11 15:17:06.496146] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.846 [2024-06-11 15:17:06.496156] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.846 [2024-06-11 15:17:06.498836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.846 [2024-06-11 15:17:06.507747] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.846 [2024-06-11 15:17:06.508207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.508560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.846 [2024-06-11 15:17:06.508590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.846 [2024-06-11 15:17:06.508613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.508826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.509061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.509075] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.509084] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.847 [2024-06-11 15:17:06.511765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.847 [2024-06-11 15:17:06.520762] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.847 [2024-06-11 15:17:06.521318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.521669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.521699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.847 [2024-06-11 15:17:06.521720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.522001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.522313] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.522328] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.522338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.847 [2024-06-11 15:17:06.524928] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.847 [2024-06-11 15:17:06.533739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.847 [2024-06-11 15:17:06.534308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.534660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.534690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.847 [2024-06-11 15:17:06.534712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.535091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.535268] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.535280] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.535291] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.847 [2024-06-11 15:17:06.538202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.847 [2024-06-11 15:17:06.546870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.847 [2024-06-11 15:17:06.547347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.547602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.547633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.847 [2024-06-11 15:17:06.547655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.548051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.548317] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.548331] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.548340] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.847 [2024-06-11 15:17:06.550929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.847 [2024-06-11 15:17:06.559742] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.847 [2024-06-11 15:17:06.560315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.560696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.560726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.847 [2024-06-11 15:17:06.560747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.561143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.561340] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.561353] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.561363] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.847 [2024-06-11 15:17:06.564019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.847 [2024-06-11 15:17:06.572840] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.847 [2024-06-11 15:17:06.573422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.573777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.573808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.847 [2024-06-11 15:17:06.573829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.574223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.574411] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.574424] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.574434] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.847 [2024-06-11 15:17:06.577073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.847 [2024-06-11 15:17:06.585772] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.847 [2024-06-11 15:17:06.586234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.586577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.586608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.847 [2024-06-11 15:17:06.586630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.586960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.587260] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.587278] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.587288] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.847 [2024-06-11 15:17:06.590060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.847 [2024-06-11 15:17:06.598751] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.847 [2024-06-11 15:17:06.599325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.599643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.599674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.847 [2024-06-11 15:17:06.599695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.600091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.600301] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.600314] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.600324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.847 [2024-06-11 15:17:06.603030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.847 [2024-06-11 15:17:06.611935] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.847 [2024-06-11 15:17:06.612495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.612796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.612827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.847 [2024-06-11 15:17:06.612849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.613124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.613256] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.613269] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.613279] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.847 [2024-06-11 15:17:06.615938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.847 [2024-06-11 15:17:06.624825] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.847 [2024-06-11 15:17:06.625276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.625655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.847 [2024-06-11 15:17:06.625686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.847 [2024-06-11 15:17:06.625708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.847 [2024-06-11 15:17:06.626054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.847 [2024-06-11 15:17:06.626271] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.847 [2024-06-11 15:17:06.626284] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.847 [2024-06-11 15:17:06.626298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.848 [2024-06-11 15:17:06.628863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.848 [2024-06-11 15:17:06.637809] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.848 [2024-06-11 15:17:06.638314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.848 [2024-06-11 15:17:06.638660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.848 [2024-06-11 15:17:06.638691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.848 [2024-06-11 15:17:06.638713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.848 [2024-06-11 15:17:06.639076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.848 [2024-06-11 15:17:06.639253] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.848 [2024-06-11 15:17:06.639266] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.848 [2024-06-11 15:17:06.639275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.848 [2024-06-11 15:17:06.642068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.848 [2024-06-11 15:17:06.650765] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.848 [2024-06-11 15:17:06.651289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.848 [2024-06-11 15:17:06.651489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.848 [2024-06-11 15:17:06.651520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.848 [2024-06-11 15:17:06.651542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.848 [2024-06-11 15:17:06.651873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.848 [2024-06-11 15:17:06.652319] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.848 [2024-06-11 15:17:06.652345] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.848 [2024-06-11 15:17:06.652366] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.848 [2024-06-11 15:17:06.654860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:47.848 [2024-06-11 15:17:06.663855] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.848 [2024-06-11 15:17:06.664435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.848 [2024-06-11 15:17:06.664755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.848 [2024-06-11 15:17:06.664786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.848 [2024-06-11 15:17:06.664808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.848 [2024-06-11 15:17:06.665203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.848 [2024-06-11 15:17:06.665494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.848 [2024-06-11 15:17:06.665506] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.848 [2024-06-11 15:17:06.665516] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.848 [2024-06-11 15:17:06.668314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.848 [2024-06-11 15:17:06.676924] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.848 [2024-06-11 15:17:06.677483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.848 [2024-06-11 15:17:06.677783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.848 [2024-06-11 15:17:06.677814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:47.848 [2024-06-11 15:17:06.677835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:47.848 [2024-06-11 15:17:06.678280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:47.848 [2024-06-11 15:17:06.678503] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.848 [2024-06-11 15:17:06.678516] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.848 [2024-06-11 15:17:06.678526] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.848 [2024-06-11 15:17:06.681322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.109 [2024-06-11 15:17:06.690233] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.109 [2024-06-11 15:17:06.690790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.691039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.691072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.109 [2024-06-11 15:17:06.691093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.109 [2024-06-11 15:17:06.691298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.109 [2024-06-11 15:17:06.691497] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.109 [2024-06-11 15:17:06.691509] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.109 [2024-06-11 15:17:06.691519] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.109 [2024-06-11 15:17:06.694358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.109 [2024-06-11 15:17:06.703425] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.109 [2024-06-11 15:17:06.704046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.704398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.704429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.109 [2024-06-11 15:17:06.704450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.109 [2024-06-11 15:17:06.704831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.109 [2024-06-11 15:17:06.705179] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.109 [2024-06-11 15:17:06.705206] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.109 [2024-06-11 15:17:06.705226] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.109 [2024-06-11 15:17:06.708202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.109 [2024-06-11 15:17:06.716459] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.109 [2024-06-11 15:17:06.716981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.717348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.717381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.109 [2024-06-11 15:17:06.717403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.109 [2024-06-11 15:17:06.717640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.109 [2024-06-11 15:17:06.717770] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.109 [2024-06-11 15:17:06.717783] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.109 [2024-06-11 15:17:06.717793] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.109 [2024-06-11 15:17:06.720273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.109 [2024-06-11 15:17:06.729425] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.109 [2024-06-11 15:17:06.729943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.730254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.730287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.109 [2024-06-11 15:17:06.730309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.109 [2024-06-11 15:17:06.730506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.109 [2024-06-11 15:17:06.730660] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.109 [2024-06-11 15:17:06.730672] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.109 [2024-06-11 15:17:06.730682] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.109 [2024-06-11 15:17:06.733433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.109 [2024-06-11 15:17:06.742475] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.109 [2024-06-11 15:17:06.743047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.743425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.743457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.109 [2024-06-11 15:17:06.743491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.109 [2024-06-11 15:17:06.743666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.109 [2024-06-11 15:17:06.743887] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.109 [2024-06-11 15:17:06.743900] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.109 [2024-06-11 15:17:06.743909] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.109 [2024-06-11 15:17:06.746751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.109 [2024-06-11 15:17:06.755378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.109 [2024-06-11 15:17:06.755949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.756312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.109 [2024-06-11 15:17:06.756347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.756371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.756554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.110 [2024-06-11 15:17:06.756684] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.110 [2024-06-11 15:17:06.756697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.110 [2024-06-11 15:17:06.756707] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.110 [2024-06-11 15:17:06.759231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.110 [2024-06-11 15:17:06.768273] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.110 [2024-06-11 15:17:06.768728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.769095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.769130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.769152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.769533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.110 [2024-06-11 15:17:06.769806] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.110 [2024-06-11 15:17:06.769819] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.110 [2024-06-11 15:17:06.769828] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.110 [2024-06-11 15:17:06.772801] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.110 [2024-06-11 15:17:06.781567] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.110 [2024-06-11 15:17:06.782112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.782464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.782495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.782518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.782892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.110 [2024-06-11 15:17:06.783120] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.110 [2024-06-11 15:17:06.783134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.110 [2024-06-11 15:17:06.783144] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.110 [2024-06-11 15:17:06.787097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.110 [2024-06-11 15:17:06.794932] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.110 [2024-06-11 15:17:06.795462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.795803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.795841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.795863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.796257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.110 [2024-06-11 15:17:06.796542] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.110 [2024-06-11 15:17:06.796567] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.110 [2024-06-11 15:17:06.796587] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.110 [2024-06-11 15:17:06.799503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.110 [2024-06-11 15:17:06.808067] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.110 [2024-06-11 15:17:06.808600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.808883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.808922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.808933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.809100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.110 [2024-06-11 15:17:06.809344] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.110 [2024-06-11 15:17:06.809356] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.110 [2024-06-11 15:17:06.809366] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.110 [2024-06-11 15:17:06.812206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.110 [2024-06-11 15:17:06.820873] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.110 [2024-06-11 15:17:06.821274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.821652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.821684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.821706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.821946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.110 [2024-06-11 15:17:06.822151] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.110 [2024-06-11 15:17:06.822165] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.110 [2024-06-11 15:17:06.822175] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.110 [2024-06-11 15:17:06.824810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.110 [2024-06-11 15:17:06.833692] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.110 [2024-06-11 15:17:06.834247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.834625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.834655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.834690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.834997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.110 [2024-06-11 15:17:06.835204] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.110 [2024-06-11 15:17:06.835218] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.110 [2024-06-11 15:17:06.835227] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.110 [2024-06-11 15:17:06.838114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.110 [2024-06-11 15:17:06.846656] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.110 [2024-06-11 15:17:06.847132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.847510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.847541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.847571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.847724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.110 [2024-06-11 15:17:06.847877] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.110 [2024-06-11 15:17:06.847889] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.110 [2024-06-11 15:17:06.847899] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.110 [2024-06-11 15:17:06.850429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.110 [2024-06-11 15:17:06.859561] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.110 [2024-06-11 15:17:06.860108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.860484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.860515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.860537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.860817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.110 [2024-06-11 15:17:06.861127] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.110 [2024-06-11 15:17:06.861140] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.110 [2024-06-11 15:17:06.861150] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.110 [2024-06-11 15:17:06.864166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.110 [2024-06-11 15:17:06.872869] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.110 [2024-06-11 15:17:06.873474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.873774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.110 [2024-06-11 15:17:06.873806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.110 [2024-06-11 15:17:06.873829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.110 [2024-06-11 15:17:06.874063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.111 [2024-06-11 15:17:06.874195] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.111 [2024-06-11 15:17:06.874208] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.111 [2024-06-11 15:17:06.874218] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.111 [2024-06-11 15:17:06.877040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.111 [2024-06-11 15:17:06.885513] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.111 [2024-06-11 15:17:06.886044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.886346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.886378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.111 [2024-06-11 15:17:06.886401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.111 [2024-06-11 15:17:06.886684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.111 [2024-06-11 15:17:06.886840] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.111 [2024-06-11 15:17:06.886853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.111 [2024-06-11 15:17:06.886864] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.111 [2024-06-11 15:17:06.889389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.111 [2024-06-11 15:17:06.898517] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.111 [2024-06-11 15:17:06.899019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.899343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.899375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.111 [2024-06-11 15:17:06.899397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.111 [2024-06-11 15:17:06.899629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.111 [2024-06-11 15:17:06.900013] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.111 [2024-06-11 15:17:06.900032] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.111 [2024-06-11 15:17:06.900044] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.111 [2024-06-11 15:17:06.902877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.111 [2024-06-11 15:17:06.911710] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.111 [2024-06-11 15:17:06.913087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.913438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.913474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.111 [2024-06-11 15:17:06.913500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.111 [2024-06-11 15:17:06.913844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.111 [2024-06-11 15:17:06.914133] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.111 [2024-06-11 15:17:06.914147] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.111 [2024-06-11 15:17:06.914157] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.111 [2024-06-11 15:17:06.916814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.111 [2024-06-11 15:17:06.924793] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.111 [2024-06-11 15:17:06.925236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.925561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.925593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.111 [2024-06-11 15:17:06.925615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.111 [2024-06-11 15:17:06.925897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.111 [2024-06-11 15:17:06.926230] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.111 [2024-06-11 15:17:06.926244] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.111 [2024-06-11 15:17:06.926254] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.111 [2024-06-11 15:17:06.928935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.111 [2024-06-11 15:17:06.937691] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.111 [2024-06-11 15:17:06.938153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.938458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.111 [2024-06-11 15:17:06.938489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.111 [2024-06-11 15:17:06.938512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.111 [2024-06-11 15:17:06.938844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.111 [2024-06-11 15:17:06.939105] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.111 [2024-06-11 15:17:06.939119] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.111 [2024-06-11 15:17:06.939130] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.111 [2024-06-11 15:17:06.941788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.372 [2024-06-11 15:17:06.950681] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.372 [2024-06-11 15:17:06.951103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:06.951422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:06.951437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.372 [2024-06-11 15:17:06.951447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.372 [2024-06-11 15:17:06.951600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.372 [2024-06-11 15:17:06.951776] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.372 [2024-06-11 15:17:06.951793] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.372 [2024-06-11 15:17:06.951802] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.372 [2024-06-11 15:17:06.954617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.372 [2024-06-11 15:17:06.963769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.372 [2024-06-11 15:17:06.964187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:06.964557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:06.964573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.372 [2024-06-11 15:17:06.964584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.372 [2024-06-11 15:17:06.964713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.372 [2024-06-11 15:17:06.964911] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.372 [2024-06-11 15:17:06.964923] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.372 [2024-06-11 15:17:06.964933] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.372 [2024-06-11 15:17:06.967594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.372 [2024-06-11 15:17:06.976735] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.372 [2024-06-11 15:17:06.977177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:06.977396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:06.977412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.372 [2024-06-11 15:17:06.977423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.372 [2024-06-11 15:17:06.977597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.372 [2024-06-11 15:17:06.977751] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.372 [2024-06-11 15:17:06.977764] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.372 [2024-06-11 15:17:06.977774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.372 [2024-06-11 15:17:06.980305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.372 [2024-06-11 15:17:06.989841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.372 [2024-06-11 15:17:06.990297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:06.990587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:06.990603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.372 [2024-06-11 15:17:06.990614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.372 [2024-06-11 15:17:06.990834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.372 [2024-06-11 15:17:06.991010] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.372 [2024-06-11 15:17:06.991022] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.372 [2024-06-11 15:17:06.991044] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.372 [2024-06-11 15:17:06.993815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.372 [2024-06-11 15:17:07.002532] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.372 [2024-06-11 15:17:07.002881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:07.003199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:07.003216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.372 [2024-06-11 15:17:07.003226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.372 [2024-06-11 15:17:07.003401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.372 [2024-06-11 15:17:07.003600] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.372 [2024-06-11 15:17:07.003613] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.372 [2024-06-11 15:17:07.003623] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.372 [2024-06-11 15:17:07.006398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.372 [2024-06-11 15:17:07.015414] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.372 [2024-06-11 15:17:07.015918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:07.016212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:07.016229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.372 [2024-06-11 15:17:07.016240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.372 [2024-06-11 15:17:07.016415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.372 [2024-06-11 15:17:07.016636] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.372 [2024-06-11 15:17:07.016649] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.372 [2024-06-11 15:17:07.016659] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.372 [2024-06-11 15:17:07.019390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.372 [2024-06-11 15:17:07.028370] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.372 [2024-06-11 15:17:07.028865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:07.029207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:07.029224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.372 [2024-06-11 15:17:07.029234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.372 [2024-06-11 15:17:07.029455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.372 [2024-06-11 15:17:07.029630] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.372 [2024-06-11 15:17:07.029643] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.372 [2024-06-11 15:17:07.029653] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.372 [2024-06-11 15:17:07.032252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.372 [2024-06-11 15:17:07.041139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.372 [2024-06-11 15:17:07.041583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:07.041834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.372 [2024-06-11 15:17:07.041850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.041860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.042065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.373 [2024-06-11 15:17:07.042174] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.373 [2024-06-11 15:17:07.042187] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.373 [2024-06-11 15:17:07.042197] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.373 [2024-06-11 15:17:07.044833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.373 [2024-06-11 15:17:07.054046] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.373 [2024-06-11 15:17:07.054431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.054701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.054717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.054727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.054947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.373 [2024-06-11 15:17:07.055152] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.373 [2024-06-11 15:17:07.055166] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.373 [2024-06-11 15:17:07.055176] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.373 [2024-06-11 15:17:07.058014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.373 [2024-06-11 15:17:07.067023] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.373 [2024-06-11 15:17:07.067484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.067780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.067810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.067831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.068176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.373 [2024-06-11 15:17:07.068486] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.373 [2024-06-11 15:17:07.068504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.373 [2024-06-11 15:17:07.068517] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.373 [2024-06-11 15:17:07.072492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.373 [2024-06-11 15:17:07.080398] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.373 [2024-06-11 15:17:07.080857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.081239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.081272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.081293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.081529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.373 [2024-06-11 15:17:07.081704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.373 [2024-06-11 15:17:07.081717] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.373 [2024-06-11 15:17:07.081727] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.373 [2024-06-11 15:17:07.084325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.373 [2024-06-11 15:17:07.093504] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.373 [2024-06-11 15:17:07.093992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.094303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.094335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.094357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.094529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.373 [2024-06-11 15:17:07.094661] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.373 [2024-06-11 15:17:07.094674] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.373 [2024-06-11 15:17:07.094683] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.373 [2024-06-11 15:17:07.097442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.373 [2024-06-11 15:17:07.106514] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.373 [2024-06-11 15:17:07.107048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.107421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.107452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.107476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.107729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.373 [2024-06-11 15:17:07.107884] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.373 [2024-06-11 15:17:07.107896] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.373 [2024-06-11 15:17:07.107906] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.373 [2024-06-11 15:17:07.110809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.373 [2024-06-11 15:17:07.119625] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.373 [2024-06-11 15:17:07.120149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.120425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.120456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.120477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.120858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.373 [2024-06-11 15:17:07.121308] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.373 [2024-06-11 15:17:07.121335] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.373 [2024-06-11 15:17:07.121357] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.373 [2024-06-11 15:17:07.124141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.373 [2024-06-11 15:17:07.132472] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.373 [2024-06-11 15:17:07.132943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.133268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.133302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.133324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.133655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.373 [2024-06-11 15:17:07.134201] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.373 [2024-06-11 15:17:07.134227] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.373 [2024-06-11 15:17:07.134250] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.373 [2024-06-11 15:17:07.136920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.373 [2024-06-11 15:17:07.145300] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.373 [2024-06-11 15:17:07.145807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.146163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.146196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.146219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.146492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.373 [2024-06-11 15:17:07.146668] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.373 [2024-06-11 15:17:07.146681] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.373 [2024-06-11 15:17:07.146691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.373 [2024-06-11 15:17:07.149383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.373 [2024-06-11 15:17:07.158174] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.373 [2024-06-11 15:17:07.158635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.158937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.373 [2024-06-11 15:17:07.158975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.373 [2024-06-11 15:17:07.158998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.373 [2024-06-11 15:17:07.159344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.374 [2024-06-11 15:17:07.159701] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.374 [2024-06-11 15:17:07.159719] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.374 [2024-06-11 15:17:07.159732] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.374 [2024-06-11 15:17:07.163804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.374 [2024-06-11 15:17:07.171871] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.374 [2024-06-11 15:17:07.172349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.374 [2024-06-11 15:17:07.172589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.374 [2024-06-11 15:17:07.172619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.374 [2024-06-11 15:17:07.172641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.374 [2024-06-11 15:17:07.173083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.374 [2024-06-11 15:17:07.173369] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.374 [2024-06-11 15:17:07.173394] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.374 [2024-06-11 15:17:07.173426] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.374 [2024-06-11 15:17:07.176227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.374 [2024-06-11 15:17:07.184695] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.374 [2024-06-11 15:17:07.185189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.374 [2024-06-11 15:17:07.185538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.374 [2024-06-11 15:17:07.185569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.374 [2024-06-11 15:17:07.185591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.374 [2024-06-11 15:17:07.185939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.374 [2024-06-11 15:17:07.186144] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.374 [2024-06-11 15:17:07.186157] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.374 [2024-06-11 15:17:07.186167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.374 [2024-06-11 15:17:07.189038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.374 [2024-06-11 15:17:07.197823] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.374 [2024-06-11 15:17:07.198369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.374 [2024-06-11 15:17:07.198726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.374 [2024-06-11 15:17:07.198757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.374 [2024-06-11 15:17:07.198786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.374 [2024-06-11 15:17:07.199079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.374 [2024-06-11 15:17:07.199413] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.374 [2024-06-11 15:17:07.199438] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.374 [2024-06-11 15:17:07.199459] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.374 [2024-06-11 15:17:07.203530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.374 [2024-06-11 15:17:07.211768] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.634 [2024-06-11 15:17:07.212248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.634 [2024-06-11 15:17:07.212524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.634 [2024-06-11 15:17:07.212540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.634 [2024-06-11 15:17:07.212550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.634 [2024-06-11 15:17:07.212748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.634 [2024-06-11 15:17:07.212901] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.634 [2024-06-11 15:17:07.212914] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.634 [2024-06-11 15:17:07.212924] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.634 [2024-06-11 15:17:07.215795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.634 [2024-06-11 15:17:07.224818] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.634 [2024-06-11 15:17:07.225358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.634 [2024-06-11 15:17:07.225667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.634 [2024-06-11 15:17:07.225698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.634 [2024-06-11 15:17:07.225720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.634 [2024-06-11 15:17:07.226032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.634 [2024-06-11 15:17:07.226278] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.634 [2024-06-11 15:17:07.226291] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.634 [2024-06-11 15:17:07.226300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.634 [2024-06-11 15:17:07.229369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.634 [2024-06-11 15:17:07.237792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.634 [2024-06-11 15:17:07.238295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.634 [2024-06-11 15:17:07.238569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.238585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.238595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.238797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.635 [2024-06-11 15:17:07.238996] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.635 [2024-06-11 15:17:07.239009] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.635 [2024-06-11 15:17:07.239019] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.635 [2024-06-11 15:17:07.241823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.635 [2024-06-11 15:17:07.250740] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.635 [2024-06-11 15:17:07.251260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.251501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.251532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.251553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.251884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.635 [2024-06-11 15:17:07.252229] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.635 [2024-06-11 15:17:07.252256] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.635 [2024-06-11 15:17:07.252277] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.635 [2024-06-11 15:17:07.255276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.635 [2024-06-11 15:17:07.263674] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.635 [2024-06-11 15:17:07.264271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.264542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.264558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.264568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.264743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.635 [2024-06-11 15:17:07.264897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.635 [2024-06-11 15:17:07.264910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.635 [2024-06-11 15:17:07.264920] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.635 [2024-06-11 15:17:07.267519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.635 [2024-06-11 15:17:07.276732] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.635 [2024-06-11 15:17:07.277234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.277529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.277560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.277582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.277963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.635 [2024-06-11 15:17:07.278182] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.635 [2024-06-11 15:17:07.278196] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.635 [2024-06-11 15:17:07.278206] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.635 [2024-06-11 15:17:07.280795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.635 [2024-06-11 15:17:07.289685] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.635 [2024-06-11 15:17:07.290135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.290420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.290452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.290474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.290755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.635 [2024-06-11 15:17:07.290984] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.635 [2024-06-11 15:17:07.291002] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.635 [2024-06-11 15:17:07.291017] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.635 [2024-06-11 15:17:07.295096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.635 [2024-06-11 15:17:07.303646] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.635 [2024-06-11 15:17:07.304266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.304489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.304520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.304543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.305003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.635 [2024-06-11 15:17:07.305187] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.635 [2024-06-11 15:17:07.305201] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.635 [2024-06-11 15:17:07.305210] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.635 [2024-06-11 15:17:07.307827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.635 [2024-06-11 15:17:07.316728] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.635 [2024-06-11 15:17:07.317290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.317504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.317519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.317529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.317704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.635 [2024-06-11 15:17:07.317926] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.635 [2024-06-11 15:17:07.317944] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.635 [2024-06-11 15:17:07.317954] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.635 [2024-06-11 15:17:07.320709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.635 [2024-06-11 15:17:07.329634] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.635 [2024-06-11 15:17:07.330072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.330460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.330492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.330514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.330845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.635 [2024-06-11 15:17:07.331119] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.635 [2024-06-11 15:17:07.331134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.635 [2024-06-11 15:17:07.331144] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.635 [2024-06-11 15:17:07.333561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.635 [2024-06-11 15:17:07.342582] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.635 [2024-06-11 15:17:07.343137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.343406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.343438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.343461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.343994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.635 [2024-06-11 15:17:07.344177] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.635 [2024-06-11 15:17:07.344191] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.635 [2024-06-11 15:17:07.344201] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.635 [2024-06-11 15:17:07.346972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.635 [2024-06-11 15:17:07.355802] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.635 [2024-06-11 15:17:07.356224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.356488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.635 [2024-06-11 15:17:07.356532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.635 [2024-06-11 15:17:07.356554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.635 [2024-06-11 15:17:07.356983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.357184] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.357198] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.357212] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.636 [2024-06-11 15:17:07.360060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.636 [2024-06-11 15:17:07.368881] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.636 [2024-06-11 15:17:07.369371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.369682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.369712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.636 [2024-06-11 15:17:07.369734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.636 [2024-06-11 15:17:07.370043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.370198] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.370211] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.370221] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.636 [2024-06-11 15:17:07.373089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.636 [2024-06-11 15:17:07.381686] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.636 [2024-06-11 15:17:07.382188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.382459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.382490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.636 [2024-06-11 15:17:07.382513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.636 [2024-06-11 15:17:07.382744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.383001] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.383014] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.383031] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.636 [2024-06-11 15:17:07.385688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.636 [2024-06-11 15:17:07.394788] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.636 [2024-06-11 15:17:07.395258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.395601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.395633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.636 [2024-06-11 15:17:07.395655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.636 [2024-06-11 15:17:07.395964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.396123] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.396136] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.396146] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.636 [2024-06-11 15:17:07.398965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.636 [2024-06-11 15:17:07.407726] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.636 [2024-06-11 15:17:07.408256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.408623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.408653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.636 [2024-06-11 15:17:07.408676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.636 [2024-06-11 15:17:07.409006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.409315] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.409329] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.409339] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.636 [2024-06-11 15:17:07.412103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.636 [2024-06-11 15:17:07.420605] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.636 [2024-06-11 15:17:07.421075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.421356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.421388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.636 [2024-06-11 15:17:07.421410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.636 [2024-06-11 15:17:07.421791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.422152] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.422169] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.422182] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.636 [2024-06-11 15:17:07.426247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.636 [2024-06-11 15:17:07.434094] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.636 [2024-06-11 15:17:07.434572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.434945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.434977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.636 [2024-06-11 15:17:07.434999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.636 [2024-06-11 15:17:07.435324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.435523] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.435536] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.435546] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.636 [2024-06-11 15:17:07.438226] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.636 [2024-06-11 15:17:07.447096] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.636 [2024-06-11 15:17:07.447555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.447921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.447952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.636 [2024-06-11 15:17:07.447975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.636 [2024-06-11 15:17:07.448247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.448378] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.448391] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.448401] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.636 [2024-06-11 15:17:07.451378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.636 [2024-06-11 15:17:07.460309] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.636 [2024-06-11 15:17:07.460828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.461116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.461151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.636 [2024-06-11 15:17:07.461173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.636 [2024-06-11 15:17:07.461653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.462128] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.462142] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.462153] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.636 [2024-06-11 15:17:07.464517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.636 [2024-06-11 15:17:07.473390] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.636 [2024-06-11 15:17:07.473958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.474273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.636 [2024-06-11 15:17:07.474290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.636 [2024-06-11 15:17:07.474300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.636 [2024-06-11 15:17:07.474452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.636 [2024-06-11 15:17:07.474649] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.636 [2024-06-11 15:17:07.474662] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.636 [2024-06-11 15:17:07.474671] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.897 [2024-06-11 15:17:07.477381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.897 [2024-06-11 15:17:07.486206] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.897 [2024-06-11 15:17:07.486742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.897 [2024-06-11 15:17:07.487137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.897 [2024-06-11 15:17:07.487170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.897 [2024-06-11 15:17:07.487205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.897 [2024-06-11 15:17:07.487403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.897 [2024-06-11 15:17:07.487601] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.897 [2024-06-11 15:17:07.487614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.897 [2024-06-11 15:17:07.487624] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.897 [2024-06-11 15:17:07.490284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.897 [2024-06-11 15:17:07.499099] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.897 [2024-06-11 15:17:07.499645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.897 [2024-06-11 15:17:07.499996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.897 [2024-06-11 15:17:07.500041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.897 [2024-06-11 15:17:07.500065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.897 [2024-06-11 15:17:07.500434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.897 [2024-06-11 15:17:07.500588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.897 [2024-06-11 15:17:07.500601] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.897 [2024-06-11 15:17:07.500611] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.897 [2024-06-11 15:17:07.503297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.897 [2024-06-11 15:17:07.512432] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.897 [2024-06-11 15:17:07.512910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.897 [2024-06-11 15:17:07.513392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.897 [2024-06-11 15:17:07.513435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.897 [2024-06-11 15:17:07.513447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.897 [2024-06-11 15:17:07.513699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.897 [2024-06-11 15:17:07.513988] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.897 [2024-06-11 15:17:07.514006] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.897 [2024-06-11 15:17:07.514019] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.897 [2024-06-11 15:17:07.518295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.897 [2024-06-11 15:17:07.525565] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.897 [2024-06-11 15:17:07.526168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.897 [2024-06-11 15:17:07.526502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.897 [2024-06-11 15:17:07.526540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.897 [2024-06-11 15:17:07.526562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.897 [2024-06-11 15:17:07.526843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.897 [2024-06-11 15:17:07.527188] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.897 [2024-06-11 15:17:07.527202] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.897 [2024-06-11 15:17:07.527213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.897 [2024-06-11 15:17:07.529982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.898 [2024-06-11 15:17:07.538627] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.539209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.539518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.539549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.898 [2024-06-11 15:17:07.539571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.898 [2024-06-11 15:17:07.539952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.898 [2024-06-11 15:17:07.540341] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.898 [2024-06-11 15:17:07.540361] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.898 [2024-06-11 15:17:07.540371] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.898 [2024-06-11 15:17:07.543032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.898 [2024-06-11 15:17:07.551628] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.552186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.552431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.552447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.898 [2024-06-11 15:17:07.552458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.898 [2024-06-11 15:17:07.552632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.898 [2024-06-11 15:17:07.552763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.898 [2024-06-11 15:17:07.552775] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.898 [2024-06-11 15:17:07.552785] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.898 [2024-06-11 15:17:07.555515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.898 [2024-06-11 15:17:07.564904] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.565408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.565749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.565780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.898 [2024-06-11 15:17:07.565809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.898 [2024-06-11 15:17:07.566204] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.898 [2024-06-11 15:17:07.566588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.898 [2024-06-11 15:17:07.566613] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.898 [2024-06-11 15:17:07.566633] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.898 [2024-06-11 15:17:07.569332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.898 [2024-06-11 15:17:07.577866] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.578487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.578844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.578875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.898 [2024-06-11 15:17:07.578897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.898 [2024-06-11 15:17:07.579242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.898 [2024-06-11 15:17:07.579491] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.898 [2024-06-11 15:17:07.579504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.898 [2024-06-11 15:17:07.579514] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.898 [2024-06-11 15:17:07.582309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.898 [2024-06-11 15:17:07.590916] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.591447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.591870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.591901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.898 [2024-06-11 15:17:07.591923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.898 [2024-06-11 15:17:07.592169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.898 [2024-06-11 15:17:07.592476] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.898 [2024-06-11 15:17:07.592489] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.898 [2024-06-11 15:17:07.592499] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.898 [2024-06-11 15:17:07.595499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.898 [2024-06-11 15:17:07.603787] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.604312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.604603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.604634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.898 [2024-06-11 15:17:07.604655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.898 [2024-06-11 15:17:07.605056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.898 [2024-06-11 15:17:07.605442] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.898 [2024-06-11 15:17:07.605467] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.898 [2024-06-11 15:17:07.605486] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.898 [2024-06-11 15:17:07.608181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.898 [2024-06-11 15:17:07.616854] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.617431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.617788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.617819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.898 [2024-06-11 15:17:07.617841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.898 [2024-06-11 15:17:07.618337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.898 [2024-06-11 15:17:07.618726] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.898 [2024-06-11 15:17:07.618738] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.898 [2024-06-11 15:17:07.618748] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.898 [2024-06-11 15:17:07.621274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.898 [2024-06-11 15:17:07.629941] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.630499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.630929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.630959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.898 [2024-06-11 15:17:07.630982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.898 [2024-06-11 15:17:07.631328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.898 [2024-06-11 15:17:07.631617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.898 [2024-06-11 15:17:07.631630] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.898 [2024-06-11 15:17:07.631640] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.898 [2024-06-11 15:17:07.634254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.898 [2024-06-11 15:17:07.642966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.643566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.643875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.643906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.898 [2024-06-11 15:17:07.643928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.898 [2024-06-11 15:17:07.644272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.898 [2024-06-11 15:17:07.644623] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.898 [2024-06-11 15:17:07.644649] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.898 [2024-06-11 15:17:07.644670] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.898 [2024-06-11 15:17:07.649004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.898 [2024-06-11 15:17:07.656961] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.898 [2024-06-11 15:17:07.657537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.657935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.898 [2024-06-11 15:17:07.657966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.899 [2024-06-11 15:17:07.657999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.899 [2024-06-11 15:17:07.658181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.899 [2024-06-11 15:17:07.658289] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.899 [2024-06-11 15:17:07.658302] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.899 [2024-06-11 15:17:07.658312] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.899 [2024-06-11 15:17:07.660792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
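Every block above is one pass of the same cycle: disconnect the controller, try to re-open the TCP socket to the target, hit ECONNREFUSED, mark reinitialization as failed, and schedule the next reset. A rough generic sketch of that loop follows; it is not SPDK's implementation, a plain socket connect stands in for nvme_tcp_qpair_connect_sock(), and the host, port, attempt count and delay are illustrative values only.

```python
# Generic retry-loop sketch of the pattern the log shows (NOT SPDK's code).
# The printed message mirrors bdev_nvme's "Resetting controller failed." line.
import socket
import time

def try_reconnect(addr: str, port: int, timeout: float = 2.0) -> bool:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((addr, port))
        return True                      # target is listening again
    except OSError:
        return False                     # e.g. errno 111 while the target is down
    finally:
        sock.close()

def reset_until_target_returns(addr: str = "10.0.0.2", port: int = 4420,
                               attempts: int = 20, delay: float = 0.5) -> bool:
    for attempt in range(attempts):
        if try_reconnect(addr, port):
            return True
        print(f"attempt {attempt}: Resetting controller failed.")
        time.sleep(delay)
    return False

if __name__ == "__main__":
    print("target reachable:", reset_until_target_returns())
```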
00:31:48.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3477650 Killed "${NVMF_APP[@]}" "$@"
00:31:48.899 15:17:07 -- host/bdevperf.sh@36 -- # tgt_init
00:31:48.899 15:17:07 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:31:48.899 15:17:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:31:48.899 15:17:07 -- common/autotest_common.sh@712 -- # xtrace_disable
00:31:48.899 15:17:07 -- common/autotest_common.sh@10 -- # set +x
00:31:48.899 [2024-06-11 15:17:07.670240] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.899 [2024-06-11 15:17:07.670814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.899 [2024-06-11 15:17:07.671170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.899 [2024-06-11 15:17:07.671205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420
00:31:48.899 [2024-06-11 15:17:07.671229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set
00:31:48.899 [2024-06-11 15:17:07.671463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor
00:31:48.899 [2024-06-11 15:17:07.671735] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.899 [2024-06-11 15:17:07.671748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.899 [2024-06-11 15:17:07.671759] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.899 [2024-06-11 15:17:07.674311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.899 15:17:07 -- nvmf/common.sh@469 -- # nvmfpid=3479275
00:31:48.899 15:17:07 -- nvmf/common.sh@470 -- # waitforlisten 3479275
00:31:48.899 15:17:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:48.899 15:17:07 -- common/autotest_common.sh@819 -- # '[' -z 3479275 ']'
00:31:48.899 15:17:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:48.899 15:17:07 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:48.899 15:17:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:48.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:48.899 15:17:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:48.899 15:17:07 -- common/autotest_common.sh@10 -- # set +x 00:31:48.899 [2024-06-11 15:17:07.683099] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.899 [2024-06-11 15:17:07.683696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.683914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.683945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.899 [2024-06-11 15:17:07.683967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.899 [2024-06-11 15:17:07.684413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.899 [2024-06-11 15:17:07.684797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.899 [2024-06-11 15:17:07.684823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.899 [2024-06-11 15:17:07.684842] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.899 [2024-06-11 15:17:07.687659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.899 [2024-06-11 15:17:07.696097] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.899 [2024-06-11 15:17:07.696650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.696912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.696943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.899 [2024-06-11 15:17:07.696965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.899 [2024-06-11 15:17:07.697166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.899 [2024-06-11 15:17:07.697366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.899 [2024-06-11 15:17:07.697379] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.899 [2024-06-11 15:17:07.697389] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.899 [2024-06-11 15:17:07.700032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.899 [2024-06-11 15:17:07.709073] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.899 [2024-06-11 15:17:07.709639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.709934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.709965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.899 [2024-06-11 15:17:07.709987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.899 [2024-06-11 15:17:07.710444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.899 [2024-06-11 15:17:07.710828] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.899 [2024-06-11 15:17:07.710853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.899 [2024-06-11 15:17:07.710882] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.899 [2024-06-11 15:17:07.713798] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.899 [2024-06-11 15:17:07.720365] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:48.899 [2024-06-11 15:17:07.720420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.899 [2024-06-11 15:17:07.722139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.899 [2024-06-11 15:17:07.722670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.722966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.722997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.899 [2024-06-11 15:17:07.723021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.899 [2024-06-11 15:17:07.723269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.899 [2024-06-11 15:17:07.723514] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.899 [2024-06-11 15:17:07.723527] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.899 [2024-06-11 15:17:07.723537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.899 [2024-06-11 15:17:07.726267] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
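With the old target killed (the bdevperf.sh "Killed" line above) and a fresh nvmf_tgt starting up, the script's waitforlisten step polls the new process's RPC socket, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message refers to. Below is a rough Python equivalent of that polling step, assuming the socket path from the log and an arbitrary timeout; the real helper is the bash waitforlisten in autotest_common.sh and is not reproduced here.

```python
# Rough equivalent of the waitforlisten step: poll the target's UNIX-domain RPC
# socket until a connection succeeds or a deadline passes. Not the framework's
# code; the path matches the log, the timeout is an arbitrary choice.
import socket
import time

def wait_for_rpc_socket(path: str = "/var/tmp/spdk.sock", timeout: float = 30.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.connect(path)           # succeeds once nvmf_tgt's RPC server is up
            return True
        except OSError:
            time.sleep(0.5)              # not listening yet; retry until the deadline
        finally:
            sock.close()
    return False

if __name__ == "__main__":
    print("rpc socket ready:", wait_for_rpc_socket())
```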
00:31:48.899 [2024-06-11 15:17:07.735239] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.899 [2024-06-11 15:17:07.735787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.736056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.899 [2024-06-11 15:17:07.736072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:48.899 [2024-06-11 15:17:07.736083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:48.899 [2024-06-11 15:17:07.736326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:48.899 [2024-06-11 15:17:07.736501] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.899 [2024-06-11 15:17:07.736514] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.899 [2024-06-11 15:17:07.736524] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.160 [2024-06-11 15:17:07.739235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.160 [2024-06-11 15:17:07.748216] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.160 [2024-06-11 15:17:07.748794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.160 [2024-06-11 15:17:07.749133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.160 [2024-06-11 15:17:07.749165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.160 [2024-06-11 15:17:07.749187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.160 [2024-06-11 15:17:07.749381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.161 [2024-06-11 15:17:07.749534] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.161 [2024-06-11 15:17:07.749551] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.161 [2024-06-11 15:17:07.749561] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.161 [2024-06-11 15:17:07.752333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.161 [2024-06-11 15:17:07.761213] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.161 [2024-06-11 15:17:07.761647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.762007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.762053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.161 [2024-06-11 15:17:07.762077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.161 [2024-06-11 15:17:07.762407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.161 [2024-06-11 15:17:07.762690] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.161 [2024-06-11 15:17:07.762715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.161 [2024-06-11 15:17:07.762743] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.161 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.161 [2024-06-11 15:17:07.765311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.161 [2024-06-11 15:17:07.774113] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.161 [2024-06-11 15:17:07.774661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.775003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.775019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.161 [2024-06-11 15:17:07.775037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.161 [2024-06-11 15:17:07.775234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.161 [2024-06-11 15:17:07.775411] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.161 [2024-06-11 15:17:07.775423] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.161 [2024-06-11 15:17:07.775433] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.161 [2024-06-11 15:17:07.778276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.161 [2024-06-11 15:17:07.786927] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.161 [2024-06-11 15:17:07.787432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.787692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.787708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.161 [2024-06-11 15:17:07.787719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.161 [2024-06-11 15:17:07.787893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.161 [2024-06-11 15:17:07.788031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.161 [2024-06-11 15:17:07.788044] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.161 [2024-06-11 15:17:07.788059] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.161 [2024-06-11 15:17:07.790715] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.161 [2024-06-11 15:17:07.799633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.161 [2024-06-11 15:17:07.800088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.800358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.800374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.161 [2024-06-11 15:17:07.800384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.161 [2024-06-11 15:17:07.800560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.161 [2024-06-11 15:17:07.800735] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.161 [2024-06-11 15:17:07.800748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.161 [2024-06-11 15:17:07.800757] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.161 [2024-06-11 15:17:07.803305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.161 [2024-06-11 15:17:07.808768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:49.161 [2024-06-11 15:17:07.812695] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.161 [2024-06-11 15:17:07.813172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.813496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.813512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.161 [2024-06-11 15:17:07.813522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.161 [2024-06-11 15:17:07.813698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.161 [2024-06-11 15:17:07.813895] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.161 [2024-06-11 15:17:07.813908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.161 [2024-06-11 15:17:07.813918] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.161 [2024-06-11 15:17:07.816650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.161 [2024-06-11 15:17:07.825746] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.161 [2024-06-11 15:17:07.826253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.826539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.826555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.161 [2024-06-11 15:17:07.826566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.161 [2024-06-11 15:17:07.826765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.161 [2024-06-11 15:17:07.826919] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.161 [2024-06-11 15:17:07.826933] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.161 [2024-06-11 15:17:07.826948] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.161 [2024-06-11 15:17:07.829722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.161 [2024-06-11 15:17:07.838915] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.161 [2024-06-11 15:17:07.839472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.839794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.839810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.161 [2024-06-11 15:17:07.839821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.161 [2024-06-11 15:17:07.839997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.161 [2024-06-11 15:17:07.840204] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.161 [2024-06-11 15:17:07.840218] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.161 [2024-06-11 15:17:07.840228] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.161 [2024-06-11 15:17:07.842883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.161 [2024-06-11 15:17:07.852192] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.161 [2024-06-11 15:17:07.852703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.853001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.853017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.161 [2024-06-11 15:17:07.853034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.161 [2024-06-11 15:17:07.853167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.161 [2024-06-11 15:17:07.853320] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.161 [2024-06-11 15:17:07.853334] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.161 [2024-06-11 15:17:07.853345] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.161 [2024-06-11 15:17:07.855823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.161 [2024-06-11 15:17:07.865253] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.161 [2024-06-11 15:17:07.865841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.161 [2024-06-11 15:17:07.866106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.866123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.162 [2024-06-11 15:17:07.866134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.162 [2024-06-11 15:17:07.866334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.162 [2024-06-11 15:17:07.866579] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.162 [2024-06-11 15:17:07.866592] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.162 [2024-06-11 15:17:07.866602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.162 [2024-06-11 15:17:07.869135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.162 [2024-06-11 15:17:07.878283] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.162 [2024-06-11 15:17:07.878808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.879073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.879091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.162 [2024-06-11 15:17:07.879102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.162 [2024-06-11 15:17:07.879278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.162 [2024-06-11 15:17:07.879431] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.162 [2024-06-11 15:17:07.879445] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.162 [2024-06-11 15:17:07.879455] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.162 [2024-06-11 15:17:07.882049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.162 [2024-06-11 15:17:07.891189] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.162 [2024-06-11 15:17:07.891688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.892035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.892052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.162 [2024-06-11 15:17:07.892062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.162 [2024-06-11 15:17:07.892237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.162 [2024-06-11 15:17:07.892459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.162 [2024-06-11 15:17:07.892473] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.162 [2024-06-11 15:17:07.892483] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.162 [2024-06-11 15:17:07.894984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:49.162 [2024-06-11 15:17:07.895118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.162 [2024-06-11 15:17:07.895131] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.162 [2024-06-11 15:17:07.895144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.162 [2024-06-11 15:17:07.895154] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:49.162 [2024-06-11 15:17:07.895195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.162 [2024-06-11 15:17:07.895308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:49.162 [2024-06-11 15:17:07.895310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.162 [2024-06-11 15:17:07.904189] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.162 [2024-06-11 15:17:07.904587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.904933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.904950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.162 [2024-06-11 15:17:07.904961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.162 [2024-06-11 15:17:07.905149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.162 [2024-06-11 15:17:07.905281] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.162 [2024-06-11 15:17:07.905294] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.162 [2024-06-11 15:17:07.905304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.162 [2024-06-11 15:17:07.908370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.162 [2024-06-11 15:17:07.917330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.162 [2024-06-11 15:17:07.917815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.918017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.918039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.162 [2024-06-11 15:17:07.918051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.162 [2024-06-11 15:17:07.918250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.162 [2024-06-11 15:17:07.918382] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.162 [2024-06-11 15:17:07.918395] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.162 [2024-06-11 15:17:07.918405] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.162 [2024-06-11 15:17:07.921177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.162 [2024-06-11 15:17:07.930242] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.162 [2024-06-11 15:17:07.930841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.931200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.931218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.162 [2024-06-11 15:17:07.931230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.162 [2024-06-11 15:17:07.931406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.162 [2024-06-11 15:17:07.931560] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.162 [2024-06-11 15:17:07.931573] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.162 [2024-06-11 15:17:07.931584] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.162 [2024-06-11 15:17:07.934292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.162 [2024-06-11 15:17:07.943238] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.162 [2024-06-11 15:17:07.943821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.944114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.944131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.162 [2024-06-11 15:17:07.944142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.162 [2024-06-11 15:17:07.944363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.162 [2024-06-11 15:17:07.944569] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.162 [2024-06-11 15:17:07.944582] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.162 [2024-06-11 15:17:07.944592] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.162 [2024-06-11 15:17:07.947192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.162 [2024-06-11 15:17:07.956211] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.162 [2024-06-11 15:17:07.956777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.957093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.957109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.162 [2024-06-11 15:17:07.957121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.162 [2024-06-11 15:17:07.957298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.162 [2024-06-11 15:17:07.957451] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.162 [2024-06-11 15:17:07.957465] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.162 [2024-06-11 15:17:07.957476] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.162 [2024-06-11 15:17:07.959953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.162 [2024-06-11 15:17:07.969476] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.162 [2024-06-11 15:17:07.970032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.970363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.162 [2024-06-11 15:17:07.970379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.162 [2024-06-11 15:17:07.970390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.163 [2024-06-11 15:17:07.970542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.163 [2024-06-11 15:17:07.970762] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.163 [2024-06-11 15:17:07.970774] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.163 [2024-06-11 15:17:07.970784] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.163 [2024-06-11 15:17:07.973242] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.163 [2024-06-11 15:17:07.982629] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.163 [2024-06-11 15:17:07.983077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.163 [2024-06-11 15:17:07.983349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.163 [2024-06-11 15:17:07.983365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.163 [2024-06-11 15:17:07.983375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.163 [2024-06-11 15:17:07.983550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.163 [2024-06-11 15:17:07.983727] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.163 [2024-06-11 15:17:07.983745] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.163 [2024-06-11 15:17:07.983755] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.163 [2024-06-11 15:17:07.986620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.163 [2024-06-11 15:17:07.995560] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.163 [2024-06-11 15:17:07.996033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.163 [2024-06-11 15:17:07.996352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.163 [2024-06-11 15:17:07.996368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.163 [2024-06-11 15:17:07.996379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.163 [2024-06-11 15:17:07.996532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.163 [2024-06-11 15:17:07.996707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.163 [2024-06-11 15:17:07.996720] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.163 [2024-06-11 15:17:07.996730] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.163 [2024-06-11 15:17:07.999551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.423 [2024-06-11 15:17:08.008828] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.423 [2024-06-11 15:17:08.009273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.423 [2024-06-11 15:17:08.009566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.423 [2024-06-11 15:17:08.009582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.423 [2024-06-11 15:17:08.009593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.423 [2024-06-11 15:17:08.009769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.423 [2024-06-11 15:17:08.009922] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.423 [2024-06-11 15:17:08.009935] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.423 [2024-06-11 15:17:08.009946] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.423 [2024-06-11 15:17:08.012933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.423 [2024-06-11 15:17:08.021575] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.423 [2024-06-11 15:17:08.021998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.423 [2024-06-11 15:17:08.022347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.423 [2024-06-11 15:17:08.022363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.022375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.022527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.022702] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.424 [2024-06-11 15:17:08.022715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.424 [2024-06-11 15:17:08.022729] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.424 [2024-06-11 15:17:08.025460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.424 [2024-06-11 15:17:08.034469] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.424 [2024-06-11 15:17:08.034993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.035257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.035273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.035284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.035437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.035634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.424 [2024-06-11 15:17:08.035646] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.424 [2024-06-11 15:17:08.035657] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.424 [2024-06-11 15:17:08.038430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.424 [2024-06-11 15:17:08.047214] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.424 [2024-06-11 15:17:08.047715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.047983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.047999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.048009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.048146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.048276] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.424 [2024-06-11 15:17:08.048289] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.424 [2024-06-11 15:17:08.048299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.424 [2024-06-11 15:17:08.050822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.424 [2024-06-11 15:17:08.060347] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.424 [2024-06-11 15:17:08.060873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.061161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.061178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.061188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.061342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.061518] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.424 [2024-06-11 15:17:08.061532] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.424 [2024-06-11 15:17:08.061542] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.424 [2024-06-11 15:17:08.064092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.424 [2024-06-11 15:17:08.073357] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.424 [2024-06-11 15:17:08.073783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.074103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.074120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.074130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.074329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.074550] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.424 [2024-06-11 15:17:08.074563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.424 [2024-06-11 15:17:08.074573] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.424 [2024-06-11 15:17:08.077548] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.424 [2024-06-11 15:17:08.086263] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.424 [2024-06-11 15:17:08.086832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.087162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.087179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.087190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.087343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.087563] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.424 [2024-06-11 15:17:08.087576] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.424 [2024-06-11 15:17:08.087585] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.424 [2024-06-11 15:17:08.090156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.424 [2024-06-11 15:17:08.099013] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.424 [2024-06-11 15:17:08.099482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.099832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.099847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.099857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.099986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.100168] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.424 [2024-06-11 15:17:08.100183] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.424 [2024-06-11 15:17:08.100193] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.424 [2024-06-11 15:17:08.102959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.424 [2024-06-11 15:17:08.112198] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.424 [2024-06-11 15:17:08.112763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.113048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.113065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.113075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.113228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.113448] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.424 [2024-06-11 15:17:08.113461] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.424 [2024-06-11 15:17:08.113471] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.424 [2024-06-11 15:17:08.116131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.424 [2024-06-11 15:17:08.125057] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.424 [2024-06-11 15:17:08.125604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.125879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.125895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.125905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.126041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.126194] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.424 [2024-06-11 15:17:08.126208] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.424 [2024-06-11 15:17:08.126217] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.424 [2024-06-11 15:17:08.128804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.424 [2024-06-11 15:17:08.137841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.424 [2024-06-11 15:17:08.138375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.138641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.424 [2024-06-11 15:17:08.138657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.424 [2024-06-11 15:17:08.138667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.424 [2024-06-11 15:17:08.138864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.424 [2024-06-11 15:17:08.139047] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.139060] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.139070] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.141902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.425 [2024-06-11 15:17:08.150578] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.425 [2024-06-11 15:17:08.151099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.151425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.151441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.425 [2024-06-11 15:17:08.151451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.425 [2024-06-11 15:17:08.151604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.425 [2024-06-11 15:17:08.151848] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.151861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.151871] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.154821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.425 [2024-06-11 15:17:08.163206] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.425 [2024-06-11 15:17:08.163703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.164018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.164041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.425 [2024-06-11 15:17:08.164052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.425 [2024-06-11 15:17:08.164250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.425 [2024-06-11 15:17:08.164381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.164394] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.164405] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.167083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.425 [2024-06-11 15:17:08.176213] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.425 [2024-06-11 15:17:08.176739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.177079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.177096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.425 [2024-06-11 15:17:08.177107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.425 [2024-06-11 15:17:08.177237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.425 [2024-06-11 15:17:08.177389] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.177402] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.177412] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.180295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.425 [2024-06-11 15:17:08.189089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.425 [2024-06-11 15:17:08.189539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.189792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.189807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.425 [2024-06-11 15:17:08.189817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.425 [2024-06-11 15:17:08.190014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.425 [2024-06-11 15:17:08.190197] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.190211] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.190221] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.192967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.425 [2024-06-11 15:17:08.201959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.425 [2024-06-11 15:17:08.202501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.202763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.202779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.425 [2024-06-11 15:17:08.202789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.425 [2024-06-11 15:17:08.202896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.425 [2024-06-11 15:17:08.203055] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.203069] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.203079] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.205937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.425 [2024-06-11 15:17:08.214967] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.425 [2024-06-11 15:17:08.215445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.215790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.215806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.425 [2024-06-11 15:17:08.215816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.425 [2024-06-11 15:17:08.215991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.425 [2024-06-11 15:17:08.216128] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.216141] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.216151] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.218852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.425 [2024-06-11 15:17:08.227749] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.425 [2024-06-11 15:17:08.228219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.228564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.228579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.425 [2024-06-11 15:17:08.228594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.425 [2024-06-11 15:17:08.228814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.425 [2024-06-11 15:17:08.228991] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.229004] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.229013] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.231787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.425 [2024-06-11 15:17:08.240885] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.425 [2024-06-11 15:17:08.241270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.241614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.241629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.425 [2024-06-11 15:17:08.241640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.425 [2024-06-11 15:17:08.241791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.425 [2024-06-11 15:17:08.241898] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.241911] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.241921] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.244290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.425 [2024-06-11 15:17:08.253769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.425 [2024-06-11 15:17:08.254191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.254537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.425 [2024-06-11 15:17:08.254553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.425 [2024-06-11 15:17:08.254564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.425 [2024-06-11 15:17:08.254740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.425 [2024-06-11 15:17:08.254870] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.425 [2024-06-11 15:17:08.254883] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.425 [2024-06-11 15:17:08.254893] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.425 [2024-06-11 15:17:08.257601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.686 [2024-06-11 15:17:08.266570] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.686 [2024-06-11 15:17:08.267112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.267366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.267381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.686 [2024-06-11 15:17:08.267392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.686 [2024-06-11 15:17:08.267593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.686 [2024-06-11 15:17:08.267769] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.686 [2024-06-11 15:17:08.267782] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.686 [2024-06-11 15:17:08.267792] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.686 [2024-06-11 15:17:08.270612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.686 [2024-06-11 15:17:08.279631] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.686 [2024-06-11 15:17:08.280194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.280535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.280551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.686 [2024-06-11 15:17:08.280562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.686 [2024-06-11 15:17:08.280736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.686 [2024-06-11 15:17:08.280956] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.686 [2024-06-11 15:17:08.280970] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.686 [2024-06-11 15:17:08.280979] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.686 [2024-06-11 15:17:08.283772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.686 [2024-06-11 15:17:08.292453] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.686 [2024-06-11 15:17:08.293000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.293322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.293338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.686 [2024-06-11 15:17:08.293348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.686 [2024-06-11 15:17:08.293455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.686 [2024-06-11 15:17:08.293653] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.686 [2024-06-11 15:17:08.293666] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.686 [2024-06-11 15:17:08.293676] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.686 [2024-06-11 15:17:08.296583] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.686 [2024-06-11 15:17:08.305619] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.686 [2024-06-11 15:17:08.306097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.306441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.306457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.686 [2024-06-11 15:17:08.306467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.686 [2024-06-11 15:17:08.306601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.686 [2024-06-11 15:17:08.306776] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.686 [2024-06-11 15:17:08.306789] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.686 [2024-06-11 15:17:08.306799] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.686 [2024-06-11 15:17:08.309572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.686 [2024-06-11 15:17:08.318444] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.686 [2024-06-11 15:17:08.318883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.319225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.686 [2024-06-11 15:17:08.319242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.686 [2024-06-11 15:17:08.319253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.686 [2024-06-11 15:17:08.319384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.686 [2024-06-11 15:17:08.319603] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.686 [2024-06-11 15:17:08.319616] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.686 [2024-06-11 15:17:08.319625] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.322149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.687 [2024-06-11 15:17:08.331260] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.331695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.332039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.332055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.687 [2024-06-11 15:17:08.332066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.687 [2024-06-11 15:17:08.332218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.687 [2024-06-11 15:17:08.332438] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.687 [2024-06-11 15:17:08.332451] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.687 [2024-06-11 15:17:08.332461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.335341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.687 [2024-06-11 15:17:08.344245] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.344745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.345001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.345017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.687 [2024-06-11 15:17:08.345033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.687 [2024-06-11 15:17:08.345207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.687 [2024-06-11 15:17:08.345386] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.687 [2024-06-11 15:17:08.345399] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.687 [2024-06-11 15:17:08.345409] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.348021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.687 [2024-06-11 15:17:08.357356] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.357786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.358079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.358097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.687 [2024-06-11 15:17:08.358107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.687 [2024-06-11 15:17:08.358329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.687 [2024-06-11 15:17:08.358482] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.687 [2024-06-11 15:17:08.358495] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.687 [2024-06-11 15:17:08.358504] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.361277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.687 [2024-06-11 15:17:08.370585] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.371126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.371390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.371406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.687 [2024-06-11 15:17:08.371416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.687 [2024-06-11 15:17:08.371614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.687 [2024-06-11 15:17:08.371745] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.687 [2024-06-11 15:17:08.371758] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.687 [2024-06-11 15:17:08.371768] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.374293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.687 [2024-06-11 15:17:08.383609] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.384201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.384524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.384540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.687 [2024-06-11 15:17:08.384550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.687 [2024-06-11 15:17:08.384703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.687 [2024-06-11 15:17:08.384832] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.687 [2024-06-11 15:17:08.384844] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.687 [2024-06-11 15:17:08.384858] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.387653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.687 [2024-06-11 15:17:08.396879] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.397429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.397694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.397710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.687 [2024-06-11 15:17:08.397720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.687 [2024-06-11 15:17:08.397872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.687 [2024-06-11 15:17:08.398074] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.687 [2024-06-11 15:17:08.398087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.687 [2024-06-11 15:17:08.398097] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.400799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.687 [2024-06-11 15:17:08.409943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.410476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.410820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.410835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.687 [2024-06-11 15:17:08.410846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.687 [2024-06-11 15:17:08.411021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.687 [2024-06-11 15:17:08.411181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.687 [2024-06-11 15:17:08.411193] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.687 [2024-06-11 15:17:08.411203] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.413842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.687 [2024-06-11 15:17:08.423058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.423651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.423911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.423927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.687 [2024-06-11 15:17:08.423938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.687 [2024-06-11 15:17:08.424095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.687 [2024-06-11 15:17:08.424293] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.687 [2024-06-11 15:17:08.424306] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.687 [2024-06-11 15:17:08.424319] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.426956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.687 [2024-06-11 15:17:08.435933] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.436367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.436683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.436698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.687 [2024-06-11 15:17:08.436709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.687 [2024-06-11 15:17:08.436929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.687 [2024-06-11 15:17:08.437086] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.687 [2024-06-11 15:17:08.437099] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.687 [2024-06-11 15:17:08.437109] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.687 [2024-06-11 15:17:08.439675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.687 [2024-06-11 15:17:08.448971] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.687 [2024-06-11 15:17:08.449421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.449781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.687 [2024-06-11 15:17:08.449796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.688 [2024-06-11 15:17:08.449806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.688 [2024-06-11 15:17:08.449982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.688 [2024-06-11 15:17:08.450141] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.688 [2024-06-11 15:17:08.450155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.688 [2024-06-11 15:17:08.450165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.688 [2024-06-11 15:17:08.453029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.688 [2024-06-11 15:17:08.461980] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.688 [2024-06-11 15:17:08.462428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.462643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.462659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.688 [2024-06-11 15:17:08.462669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.688 [2024-06-11 15:17:08.462844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.688 [2024-06-11 15:17:08.463019] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.688 [2024-06-11 15:17:08.463040] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.688 [2024-06-11 15:17:08.463050] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.688 [2024-06-11 15:17:08.465796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.688 [2024-06-11 15:17:08.474849] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.688 [2024-06-11 15:17:08.475352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.475618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.475634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.688 [2024-06-11 15:17:08.475644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.688 [2024-06-11 15:17:08.475796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.688 [2024-06-11 15:17:08.475971] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.688 [2024-06-11 15:17:08.475983] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.688 [2024-06-11 15:17:08.475993] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.688 [2024-06-11 15:17:08.478769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.688 [2024-06-11 15:17:08.487986] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.688 [2024-06-11 15:17:08.488457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.488803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.488819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.688 [2024-06-11 15:17:08.488829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.688 [2024-06-11 15:17:08.489005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.688 [2024-06-11 15:17:08.489141] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.688 [2024-06-11 15:17:08.489155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.688 [2024-06-11 15:17:08.489165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.688 [2024-06-11 15:17:08.492002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.688 [2024-06-11 15:17:08.500986] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.688 [2024-06-11 15:17:08.501502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.501761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.501777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.688 [2024-06-11 15:17:08.501788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.688 [2024-06-11 15:17:08.501986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.688 [2024-06-11 15:17:08.502121] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.688 [2024-06-11 15:17:08.502135] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.688 [2024-06-11 15:17:08.502144] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.688 [2024-06-11 15:17:08.504892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.688 [2024-06-11 15:17:08.513835] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.688 [2024-06-11 15:17:08.514259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.514577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.688 [2024-06-11 15:17:08.514592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.688 [2024-06-11 15:17:08.514603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.688 [2024-06-11 15:17:08.514822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.688 [2024-06-11 15:17:08.515021] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.688 [2024-06-11 15:17:08.515039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.688 [2024-06-11 15:17:08.515049] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.688 [2024-06-11 15:17:08.517728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.949 [2024-06-11 15:17:08.526715] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.949 [2024-06-11 15:17:08.527186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.527511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.527526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.949 [2024-06-11 15:17:08.527536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.949 [2024-06-11 15:17:08.527643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.949 [2024-06-11 15:17:08.527796] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.949 [2024-06-11 15:17:08.527808] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.949 [2024-06-11 15:17:08.527818] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.949 [2024-06-11 15:17:08.530774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.949 [2024-06-11 15:17:08.539525] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.949 [2024-06-11 15:17:08.539940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.540299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.540315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.949 [2024-06-11 15:17:08.540326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.949 [2024-06-11 15:17:08.540501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.949 [2024-06-11 15:17:08.540654] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.949 [2024-06-11 15:17:08.540667] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.949 [2024-06-11 15:17:08.540677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.949 [2024-06-11 15:17:08.543361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.949 [2024-06-11 15:17:08.552615] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.949 [2024-06-11 15:17:08.553042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.553316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.553332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.949 [2024-06-11 15:17:08.553342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.949 [2024-06-11 15:17:08.553540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.949 [2024-06-11 15:17:08.553738] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.949 [2024-06-11 15:17:08.553751] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.949 [2024-06-11 15:17:08.553761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.949 [2024-06-11 15:17:08.556490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.949 [2024-06-11 15:17:08.565868] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.949 [2024-06-11 15:17:08.566345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.566691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.566707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.949 [2024-06-11 15:17:08.566717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.949 [2024-06-11 15:17:08.566823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.949 [2024-06-11 15:17:08.567021] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.949 [2024-06-11 15:17:08.567041] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.949 [2024-06-11 15:17:08.567052] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.949 [2024-06-11 15:17:08.569820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.949 [2024-06-11 15:17:08.578774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.949 [2024-06-11 15:17:08.579266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.579534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.579550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.949 [2024-06-11 15:17:08.579561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.949 [2024-06-11 15:17:08.579712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.949 [2024-06-11 15:17:08.579864] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.949 [2024-06-11 15:17:08.579878] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.949 [2024-06-11 15:17:08.579887] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.949 [2024-06-11 15:17:08.582481] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.949 [2024-06-11 15:17:08.591721] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.949 [2024-06-11 15:17:08.592097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.592368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.592384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.949 [2024-06-11 15:17:08.592399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.949 [2024-06-11 15:17:08.592528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.949 [2024-06-11 15:17:08.592680] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.949 [2024-06-11 15:17:08.592694] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.949 [2024-06-11 15:17:08.592704] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.949 [2024-06-11 15:17:08.595459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.949 [2024-06-11 15:17:08.604892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.949 [2024-06-11 15:17:08.605316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.605587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.949 [2024-06-11 15:17:08.605603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.949 [2024-06-11 15:17:08.605613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.949 [2024-06-11 15:17:08.605766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.949 [2024-06-11 15:17:08.605940] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.949 [2024-06-11 15:17:08.605953] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.949 [2024-06-11 15:17:08.605963] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.950 [2024-06-11 15:17:08.608804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.950 [2024-06-11 15:17:08.617922] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.950 [2024-06-11 15:17:08.618466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.618714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.618730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.950 [2024-06-11 15:17:08.618740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.950 [2024-06-11 15:17:08.618937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.950 [2024-06-11 15:17:08.619142] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.950 [2024-06-11 15:17:08.619155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.950 [2024-06-11 15:17:08.619165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.950 [2024-06-11 15:17:08.621845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.950 [2024-06-11 15:17:08.630867] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.950 [2024-06-11 15:17:08.631317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.631528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.631544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.950 [2024-06-11 15:17:08.631559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.950 [2024-06-11 15:17:08.631755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.950 [2024-06-11 15:17:08.631976] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.950 [2024-06-11 15:17:08.631988] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.950 [2024-06-11 15:17:08.631998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.950 [2024-06-11 15:17:08.634820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.950 [2024-06-11 15:17:08.644054] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.950 [2024-06-11 15:17:08.644481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.644826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.644841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.950 [2024-06-11 15:17:08.644851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.950 [2024-06-11 15:17:08.645054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.950 [2024-06-11 15:17:08.645162] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.950 [2024-06-11 15:17:08.645175] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.950 [2024-06-11 15:17:08.645186] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.950 [2024-06-11 15:17:08.647864] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.950 [2024-06-11 15:17:08.657066] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.950 [2024-06-11 15:17:08.657722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.657996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.658012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.950 [2024-06-11 15:17:08.658022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.950 [2024-06-11 15:17:08.658203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.950 [2024-06-11 15:17:08.658402] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.950 [2024-06-11 15:17:08.658414] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.950 [2024-06-11 15:17:08.658424] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.950 15:17:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:49.950 15:17:08 -- common/autotest_common.sh@852 -- # return 0 00:31:49.950 15:17:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:49.950 15:17:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:49.950 [2024-06-11 15:17:08.661198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.950 15:17:08 -- common/autotest_common.sh@10 -- # set +x 00:31:49.950 [2024-06-11 15:17:08.670129] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.950 [2024-06-11 15:17:08.670598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.670865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.670885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.950 [2024-06-11 15:17:08.670896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.950 [2024-06-11 15:17:08.671077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.950 [2024-06-11 15:17:08.671230] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.950 [2024-06-11 15:17:08.671244] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.950 [2024-06-11 15:17:08.671255] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.950 [2024-06-11 15:17:08.674166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.950 [2024-06-11 15:17:08.683210] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.950 [2024-06-11 15:17:08.683512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.683838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.683853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.950 [2024-06-11 15:17:08.683863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.950 [2024-06-11 15:17:08.684044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.950 [2024-06-11 15:17:08.684241] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.950 [2024-06-11 15:17:08.684255] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.950 [2024-06-11 15:17:08.684264] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.950 [2024-06-11 15:17:08.686878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.950 15:17:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.950 [2024-06-11 15:17:08.696541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.950 15:17:08 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:49.950 15:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.950 [2024-06-11 15:17:08.697063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 15:17:08 -- common/autotest_common.sh@10 -- # set +x 00:31:49.950 [2024-06-11 15:17:08.697282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.697299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.950 [2024-06-11 15:17:08.697310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.950 [2024-06-11 15:17:08.697508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.950 [2024-06-11 15:17:08.697638] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.950 [2024-06-11 15:17:08.697653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.950 [2024-06-11 15:17:08.697665] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.950 [2024-06-11 15:17:08.700466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.950 [2024-06-11 15:17:08.702221] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.950 15:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.950 15:17:08 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:49.950 15:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.950 15:17:08 -- common/autotest_common.sh@10 -- # set +x 00:31:49.950 [2024-06-11 15:17:08.709451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.950 [2024-06-11 15:17:08.709834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.710102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.710119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.950 [2024-06-11 15:17:08.710129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.950 [2024-06-11 15:17:08.710281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.950 [2024-06-11 15:17:08.710479] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.950 [2024-06-11 15:17:08.710492] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.950 [2024-06-11 15:17:08.710501] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.950 [2024-06-11 15:17:08.713107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.950 [2024-06-11 15:17:08.722158] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.950 [2024-06-11 15:17:08.722570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.950 [2024-06-11 15:17:08.722834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.951 [2024-06-11 15:17:08.722849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.951 [2024-06-11 15:17:08.722860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.951 [2024-06-11 15:17:08.723041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.951 [2024-06-11 15:17:08.723240] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.951 [2024-06-11 15:17:08.723253] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.951 [2024-06-11 15:17:08.723262] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.951 [2024-06-11 15:17:08.726104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.951 [2024-06-11 15:17:08.735404] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.951 [2024-06-11 15:17:08.735921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.951 [2024-06-11 15:17:08.736187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.951 [2024-06-11 15:17:08.736204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.951 [2024-06-11 15:17:08.736215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.951 [2024-06-11 15:17:08.736369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.951 [2024-06-11 15:17:08.736498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.951 [2024-06-11 15:17:08.736511] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.951 [2024-06-11 15:17:08.736521] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.951 [2024-06-11 15:17:08.739343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.951 Malloc0 00:31:49.951 15:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.951 15:17:08 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:49.951 15:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.951 15:17:08 -- common/autotest_common.sh@10 -- # set +x 00:31:49.951 [2024-06-11 15:17:08.748178] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.951 [2024-06-11 15:17:08.748650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.951 [2024-06-11 15:17:08.748869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.951 [2024-06-11 15:17:08.748885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.951 [2024-06-11 15:17:08.748895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.951 [2024-06-11 15:17:08.749100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.951 [2024-06-11 15:17:08.749277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.951 [2024-06-11 15:17:08.749289] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.951 [2024-06-11 15:17:08.749298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.951 [2024-06-11 15:17:08.752053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.951 15:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.951 15:17:08 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:49.951 15:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.951 15:17:08 -- common/autotest_common.sh@10 -- # set +x 00:31:49.951 [2024-06-11 15:17:08.761328] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.951 [2024-06-11 15:17:08.761820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.951 [2024-06-11 15:17:08.762037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.951 [2024-06-11 15:17:08.762054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114ae40 with addr=10.0.0.2, port=4420 00:31:49.951 [2024-06-11 15:17:08.762064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114ae40 is same with the state(5) to be set 00:31:49.951 [2024-06-11 15:17:08.762217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ae40 (9): Bad file descriptor 00:31:49.951 [2024-06-11 15:17:08.762391] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.951 [2024-06-11 15:17:08.762404] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.951 [2024-06-11 15:17:08.762414] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.951 15:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.951 15:17:08 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.951 15:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.951 [2024-06-11 15:17:08.765055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.951 15:17:08 -- common/autotest_common.sh@10 -- # set +x 00:31:49.951 [2024-06-11 15:17:08.767759] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.951 15:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.951 15:17:08 -- host/bdevperf.sh@38 -- # wait 3478209 00:31:49.951 [2024-06-11 15:17:08.774395] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.210 [2024-06-11 15:17:08.810170] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
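The xtrace lines interleaved with the reconnect noise above show the target side being assembled while the host keeps retrying: bdevperf.sh creates the TCP transport, a Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with a serial number, attaches Malloc0 as its namespace, and finally adds a TCP listener on 10.0.0.2:4420 — at which point the pending reset completes ("Resetting controller successful."). The rpc_cmd helper is effectively a wrapper around SPDK's RPC client, so the same bring-up can be reproduced by hand roughly as follows (a sketch, assuming an SPDK checkout with scripts/rpc.py and an already-running nvmf_tgt; the arguments are copied verbatim from the trace above):

#!/usr/bin/env bash
rpc=./scripts/rpc.py   # path assumes the working directory is the SPDK repository root

$rpc nvmf_create_transport -t tcp -o -u 8192                  # "*** TCP Transport Init ***"
$rpc bdev_malloc_create 64 512 -b Malloc0                     # RAM-backed bdev used as the namespace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# -> "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***"

With the listener in place, bdevperf (waited on as pid 3478209) reconnects and runs its verify job against Nvme1n1, which produces the latency summary that follows.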
00:31:58.330 00:31:58.330 Latency(us) 00:31:58.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.330 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:58.330 Verification LBA range: start 0x0 length 0x4000 00:31:58.330 Nvme1n1 : 15.01 8386.97 32.76 12510.12 0.00 6106.87 1005.38 20852.36 00:31:58.330 =================================================================================================================== 00:31:58.330 Total : 8386.97 32.76 12510.12 0.00 6106.87 1005.38 20852.36 00:31:58.590 15:17:17 -- host/bdevperf.sh@39 -- # sync 00:31:58.590 15:17:17 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.590 15:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:58.590 15:17:17 -- common/autotest_common.sh@10 -- # set +x 00:31:58.590 15:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:58.590 15:17:17 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:58.590 15:17:17 -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:58.590 15:17:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:58.590 15:17:17 -- nvmf/common.sh@116 -- # sync 00:31:58.590 15:17:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:58.590 15:17:17 -- nvmf/common.sh@119 -- # set +e 00:31:58.590 15:17:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:58.590 15:17:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:58.590 rmmod nvme_tcp 00:31:58.590 rmmod nvme_fabrics 00:31:58.590 rmmod nvme_keyring 00:31:58.590 15:17:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:58.590 15:17:17 -- nvmf/common.sh@123 -- # set -e 00:31:58.590 15:17:17 -- nvmf/common.sh@124 -- # return 0 00:31:58.590 15:17:17 -- nvmf/common.sh@477 -- # '[' -n 3479275 ']' 00:31:58.590 15:17:17 -- nvmf/common.sh@478 -- # killprocess 3479275 00:31:58.590 15:17:17 -- common/autotest_common.sh@926 -- # '[' -z 3479275 ']' 00:31:58.590 15:17:17 -- common/autotest_common.sh@930 -- # kill -0 3479275 00:31:58.590 15:17:17 -- common/autotest_common.sh@931 -- # uname 00:31:58.590 15:17:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:58.590 15:17:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3479275 00:31:58.850 15:17:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:58.850 15:17:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:58.850 15:17:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3479275' 00:31:58.850 killing process with pid 3479275 00:31:58.850 15:17:17 -- common/autotest_common.sh@945 -- # kill 3479275 00:31:58.850 15:17:17 -- common/autotest_common.sh@950 -- # wait 3479275 00:31:59.110 15:17:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:59.110 15:17:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:59.110 15:17:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:59.110 15:17:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:59.110 15:17:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:59.110 15:17:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.110 15:17:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.110 15:17:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.016 15:17:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:01.016 00:32:01.016 real 0m27.213s 00:32:01.016 user 1m4.244s 00:32:01.016 sys 0m6.863s 00:32:01.016 15:17:19 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.016 15:17:19 -- common/autotest_common.sh@10 -- # set +x 00:32:01.016 ************************************ 00:32:01.016 END TEST nvmf_bdevperf 00:32:01.016 ************************************ 00:32:01.016 15:17:19 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:01.016 15:17:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:01.016 15:17:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:01.016 15:17:19 -- common/autotest_common.sh@10 -- # set +x 00:32:01.016 ************************************ 00:32:01.016 START TEST nvmf_target_disconnect 00:32:01.016 ************************************ 00:32:01.016 15:17:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:01.275 * Looking for test storage... 00:32:01.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:01.275 15:17:19 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.275 15:17:19 -- nvmf/common.sh@7 -- # uname -s 00:32:01.275 15:17:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.275 15:17:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.275 15:17:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.275 15:17:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.275 15:17:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.275 15:17:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.275 15:17:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.275 15:17:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.275 15:17:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.275 15:17:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.275 15:17:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:01.275 15:17:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:01.275 15:17:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.275 15:17:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.275 15:17:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.275 15:17:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.275 15:17:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.275 15:17:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.275 15:17:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.275 15:17:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.275 15:17:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.275 15:17:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.275 15:17:19 -- paths/export.sh@5 -- # export PATH 00:32:01.275 15:17:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.275 15:17:19 -- nvmf/common.sh@46 -- # : 0 00:32:01.275 15:17:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:01.275 15:17:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:01.275 15:17:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:01.275 15:17:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.275 15:17:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.275 15:17:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:01.275 15:17:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:01.275 15:17:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:01.275 15:17:19 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:01.275 15:17:19 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:01.275 15:17:19 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:01.275 15:17:19 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:32:01.275 15:17:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:01.275 15:17:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.275 15:17:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:01.275 15:17:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:01.275 15:17:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:01.275 15:17:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.275 15:17:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:01.275 15:17:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.275 15:17:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:01.275 15:17:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:01.275 15:17:19 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:32:01.275 15:17:19 -- common/autotest_common.sh@10 -- # set +x 00:32:07.847 15:17:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:07.847 15:17:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:07.847 15:17:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:07.847 15:17:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:07.847 15:17:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:07.847 15:17:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:07.848 15:17:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:07.848 15:17:26 -- nvmf/common.sh@294 -- # net_devs=() 00:32:07.848 15:17:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:07.848 15:17:26 -- nvmf/common.sh@295 -- # e810=() 00:32:07.848 15:17:26 -- nvmf/common.sh@295 -- # local -ga e810 00:32:07.848 15:17:26 -- nvmf/common.sh@296 -- # x722=() 00:32:07.848 15:17:26 -- nvmf/common.sh@296 -- # local -ga x722 00:32:07.848 15:17:26 -- nvmf/common.sh@297 -- # mlx=() 00:32:07.848 15:17:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:07.848 15:17:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.848 15:17:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:07.848 15:17:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:07.848 15:17:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:07.848 15:17:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:07.848 15:17:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:07.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:07.848 15:17:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:07.848 15:17:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:07.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:07.848 15:17:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:07.848 15:17:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:07.848 15:17:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.848 15:17:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:07.848 15:17:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.848 15:17:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:07.848 Found net devices under 0000:af:00.0: cvl_0_0 00:32:07.848 15:17:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.848 15:17:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:07.848 15:17:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.848 15:17:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:07.848 15:17:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.848 15:17:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:07.848 Found net devices under 0000:af:00.1: cvl_0_1 00:32:07.848 15:17:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.848 15:17:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:07.848 15:17:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:07.848 15:17:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:07.848 15:17:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.848 15:17:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.848 15:17:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.848 15:17:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:07.848 15:17:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.848 15:17:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.848 15:17:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:07.848 15:17:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.848 15:17:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.848 15:17:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:07.848 15:17:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:07.848 15:17:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.848 15:17:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.848 15:17:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.848 15:17:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.848 15:17:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:07.848 15:17:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.848 15:17:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.848 15:17:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.848 15:17:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:07.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:07.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:32:07.848 00:32:07.848 --- 10.0.0.2 ping statistics --- 00:32:07.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.848 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:32:07.848 15:17:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:07.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:32:07.848 00:32:07.848 --- 10.0.0.1 ping statistics --- 00:32:07.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.848 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:32:07.848 15:17:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.848 15:17:26 -- nvmf/common.sh@410 -- # return 0 00:32:07.848 15:17:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:07.848 15:17:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.848 15:17:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:07.848 15:17:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.848 15:17:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:07.848 15:17:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:07.848 15:17:26 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:07.848 15:17:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:07.848 15:17:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:07.848 15:17:26 -- common/autotest_common.sh@10 -- # set +x 00:32:07.848 ************************************ 00:32:07.848 START TEST nvmf_target_disconnect_tc1 00:32:07.848 ************************************ 00:32:07.848 15:17:26 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:32:07.848 15:17:26 -- host/target_disconnect.sh@32 -- # set +e 00:32:07.848 15:17:26 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:07.848 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.848 [2024-06-11 15:17:26.534789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.848 [2024-06-11 15:17:26.535242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.848 [2024-06-11 15:17:26.535284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f15a60 with addr=10.0.0.2, port=4420 00:32:07.848 [2024-06-11 15:17:26.535339] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:07.848 [2024-06-11 15:17:26.535365] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:07.848 [2024-06-11 15:17:26.535384] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:07.848 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:07.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:07.848 Initializing NVMe Controllers 00:32:07.848 15:17:26 -- host/target_disconnect.sh@33 -- # trap - ERR 00:32:07.848 15:17:26 -- host/target_disconnect.sh@33 -- # print_backtrace 00:32:07.848 15:17:26 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:32:07.848 15:17:26 -- common/autotest_common.sh@1132 -- # return 0 00:32:07.848 
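The tc1 case above exercises the failure path on purpose: the reconnect example is pointed at 10.0.0.2:4420 before any NVMe/TCP listener exists there, so each connect() fails with errno 111 (ECONNREFUSED), spdk_nvme_probe() reports "Create probe context failed", and the example is expected to exit with an error, which the script then treats as a pass. A minimal way to reproduce that expectation by hand, using the same invocation as this job (run from the spdk checkout, with no target listening):

  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  echo "reconnect exit status: $?"    # a non-zero status here is the expected outcome for tc1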
15:17:26 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:32:07.848 15:17:26 -- host/target_disconnect.sh@41 -- # set -e 00:32:07.848 00:32:07.848 real 0m0.122s 00:32:07.848 user 0m0.043s 00:32:07.848 sys 0m0.077s 00:32:07.848 15:17:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:07.848 15:17:26 -- common/autotest_common.sh@10 -- # set +x 00:32:07.848 ************************************ 00:32:07.848 END TEST nvmf_target_disconnect_tc1 00:32:07.848 ************************************ 00:32:07.848 15:17:26 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:07.848 15:17:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:07.848 15:17:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:07.848 15:17:26 -- common/autotest_common.sh@10 -- # set +x 00:32:07.848 ************************************ 00:32:07.848 START TEST nvmf_target_disconnect_tc2 00:32:07.848 ************************************ 00:32:07.848 15:17:26 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:32:07.848 15:17:26 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:32:07.848 15:17:26 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:07.849 15:17:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:07.849 15:17:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:07.849 15:17:26 -- common/autotest_common.sh@10 -- # set +x 00:32:07.849 15:17:26 -- nvmf/common.sh@469 -- # nvmfpid=3484945 00:32:07.849 15:17:26 -- nvmf/common.sh@470 -- # waitforlisten 3484945 00:32:07.849 15:17:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:07.849 15:17:26 -- common/autotest_common.sh@819 -- # '[' -z 3484945 ']' 00:32:07.849 15:17:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.849 15:17:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:07.849 15:17:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.849 15:17:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:07.849 15:17:26 -- common/autotest_common.sh@10 -- # set +x 00:32:07.849 [2024-06-11 15:17:26.644450] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:07.849 [2024-06-11 15:17:26.644506] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.849 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.107 [2024-06-11 15:17:26.737586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:08.107 [2024-06-11 15:17:26.826309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:08.107 [2024-06-11 15:17:26.826452] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.107 [2024-06-11 15:17:26.826464] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.107 [2024-06-11 15:17:26.826474] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:08.107 [2024-06-11 15:17:26.826537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:08.107 [2024-06-11 15:17:26.826648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:08.107 [2024-06-11 15:17:26.826761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:08.107 [2024-06-11 15:17:26.826761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:09.042 15:17:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:09.042 15:17:27 -- common/autotest_common.sh@852 -- # return 0 00:32:09.042 15:17:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:09.042 15:17:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:09.042 15:17:27 -- common/autotest_common.sh@10 -- # set +x 00:32:09.042 15:17:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.042 15:17:27 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:09.042 15:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.042 15:17:27 -- common/autotest_common.sh@10 -- # set +x 00:32:09.042 Malloc0 00:32:09.042 15:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.042 15:17:27 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:09.043 15:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.043 15:17:27 -- common/autotest_common.sh@10 -- # set +x 00:32:09.043 [2024-06-11 15:17:27.641901] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.043 15:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.043 15:17:27 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:09.043 15:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.043 15:17:27 -- common/autotest_common.sh@10 -- # set +x 00:32:09.043 15:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.043 15:17:27 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:09.043 15:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.043 15:17:27 -- common/autotest_common.sh@10 -- # set +x 00:32:09.043 15:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.043 15:17:27 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.043 15:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.043 15:17:27 -- common/autotest_common.sh@10 -- # set +x 00:32:09.043 [2024-06-11 15:17:27.670158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.043 15:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.043 15:17:27 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:09.043 15:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.043 15:17:27 -- common/autotest_common.sh@10 -- # set +x 00:32:09.043 15:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.043 15:17:27 -- host/target_disconnect.sh@50 -- # reconnectpid=3485228 00:32:09.043 15:17:27 -- host/target_disconnect.sh@52 -- # sleep 2 00:32:09.043 15:17:27 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:09.043 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.948 15:17:29 -- host/target_disconnect.sh@53 -- # kill -9 3484945 00:32:10.948 15:17:29 -- host/target_disconnect.sh@55 -- # sleep 2 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Read completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Write completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.948 Write completed with error (sct=0, sc=8) 00:32:10.948 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 [2024-06-11 15:17:29.699429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed 
with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 [2024-06-11 15:17:29.699723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error 
(sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 [2024-06-11 15:17:29.699912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 
00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Write completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.949 Read completed with error (sct=0, sc=8) 00:32:10.949 starting I/O failed 00:32:10.950 Read completed with error (sct=0, sc=8) 00:32:10.950 starting I/O failed 00:32:10.950 Read completed with error (sct=0, sc=8) 00:32:10.950 starting I/O failed 00:32:10.950 Read completed with error (sct=0, sc=8) 00:32:10.950 starting I/O failed 00:32:10.950 Write completed with error (sct=0, sc=8) 00:32:10.950 starting I/O failed 00:32:10.950 Write completed with error (sct=0, sc=8) 00:32:10.950 starting I/O failed 00:32:10.950 Read completed with error (sct=0, sc=8) 00:32:10.950 starting I/O failed 00:32:10.950 Read completed with error (sct=0, sc=8) 00:32:10.950 starting I/O failed 00:32:10.950 Write completed with error (sct=0, sc=8) 00:32:10.950 starting I/O failed 00:32:10.950 [2024-06-11 15:17:29.700202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:10.950 [2024-06-11 15:17:29.700411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.700821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.700854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.701228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.701574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.701603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.701962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.702327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.702358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.702707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.703064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.703094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 
00:32:10.950 [2024-06-11 15:17:29.703416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.703704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.703734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.704090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.704437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.704466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.704815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.705127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.705152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.705491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.705928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.705958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.706325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.706576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.706605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.706981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.707301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.707331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.707668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.708044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.708075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 
00:32:10.950 [2024-06-11 15:17:29.708395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.708676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.708706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.709109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.709451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.709480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.709851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.710217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.710247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.710545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.710790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.710819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.711163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.711534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.711563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.711879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.712231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.712257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.712615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.712919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.712944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 
00:32:10.950 [2024-06-11 15:17:29.713170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.713473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.713502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.713799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.714171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.714201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.714496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.714854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.714883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.715269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.715554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.715583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.950 qpair failed and we were unable to recover it. 00:32:10.950 [2024-06-11 15:17:29.715904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.950 [2024-06-11 15:17:29.716188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.716218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.716531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.716746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.716775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.717083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.717374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.717403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 
00:32:10.951 [2024-06-11 15:17:29.717696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.718041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.718072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.718426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.718637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.718666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.718966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.719243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.719272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.719579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.719922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.719952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.720230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.720519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.720548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.720924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.721267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.721297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.721538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.721819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.721849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 
00:32:10.951 [2024-06-11 15:17:29.722072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.722358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.722388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.722753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.722974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.723004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.723402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.723689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.723718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.724002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.724296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.724327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.724551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.724854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.724884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.725249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.725587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.725623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.725925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.726229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.726260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 
00:32:10.951 [2024-06-11 15:17:29.726481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.726856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.726885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.727200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.727561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.727591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.727882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.728217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.728246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.728538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.728820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.728849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.729226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.729452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.729482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.729810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.730095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.730124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.730357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.730639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.730668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 
00:32:10.951 [2024-06-11 15:17:29.731009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.731319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.731349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.731569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.731807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.731837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.951 qpair failed and we were unable to recover it. 00:32:10.951 [2024-06-11 15:17:29.732202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.951 [2024-06-11 15:17:29.732428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.732457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.732771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.733069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.733099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.733450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.733787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.733816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.734048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.734421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.734450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.734729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.735124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.735154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 
00:32:10.952 [2024-06-11 15:17:29.735460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.735825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.735855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.736072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.736345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.736375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.736575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.736849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.736878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.737188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.737422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.737451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.737817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.738102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.738133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.738475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.738741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.738770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.738983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.739269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.739299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 
00:32:10.952 [2024-06-11 15:17:29.739682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.739987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.740016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.740338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.740621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.740650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.740854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.741152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.741182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.741552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.741770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.741798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.742117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.742404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.742433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.742714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.743098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.743128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.743507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.743783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.743812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 
00:32:10.952 [2024-06-11 15:17:29.744128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.744491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.744520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.744860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.745253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.745283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.745630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.745966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.745995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.746292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.746697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.746726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.747116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.747332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.747361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.747588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.747948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.747977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.748367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.748730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.748759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 
00:32:10.952 [2024-06-11 15:17:29.749135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.749422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.749451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.749800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.750085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.952 [2024-06-11 15:17:29.750115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.952 qpair failed and we were unable to recover it. 00:32:10.952 [2024-06-11 15:17:29.750481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.750876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.750906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.751193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.751536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.751566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.751851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.752122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.752153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.752465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.752805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.752834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.753121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.753403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.753432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 
00:32:10.953 [2024-06-11 15:17:29.753774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.754051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.754081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.754453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.754811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.754841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.755122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.755399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.755428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.755648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.755857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.755886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.756137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.756421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.756450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.756751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.757173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.757203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.757623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.757928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.757958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 
00:32:10.953 [2024-06-11 15:17:29.758317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.758687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.758716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.759083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.759414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.759443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.759781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.760123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.760154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.760440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.760810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.760839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.761258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.761595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.761625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.761925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.762151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.762181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.762520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.762784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.762814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 
00:32:10.953 [2024-06-11 15:17:29.763095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.763463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.763494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.763828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.764061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.764092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.764377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.764739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.764768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.953 qpair failed and we were unable to recover it. 00:32:10.953 [2024-06-11 15:17:29.765131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.953 [2024-06-11 15:17:29.765495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.765524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.765893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.766252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.766282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.766628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.766970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.767000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.767359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.767700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.767729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 
00:32:10.954 [2024-06-11 15:17:29.768002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.768304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.768333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.768738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.769043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.769074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.769419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.769771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.769800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.770148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.770383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.770412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.770720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.770938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.770966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.771281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.771655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.771684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.771956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.772278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.772309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 
00:32:10.954 [2024-06-11 15:17:29.772624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.772913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.772941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.773231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.773545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.773574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.773970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.774346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.774376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.774662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.775058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.775088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.775391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.775602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.775631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.775999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.776389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.776420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.776757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.777154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.777185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 
00:32:10.954 [2024-06-11 15:17:29.777550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.777940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.777969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.778371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.778734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.778763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.779131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.779413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.779442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.779785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.780071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.780102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.780443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.780808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.780843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.781187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.781539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.781568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.781965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.782273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.782303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 
00:32:10.954 [2024-06-11 15:17:29.782657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.783034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.783064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.783345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.783682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.783711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.784079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.784418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.784447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.784839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.785124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.954 [2024-06-11 15:17:29.785154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.954 qpair failed and we were unable to recover it. 00:32:10.954 [2024-06-11 15:17:29.785436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.955 [2024-06-11 15:17:29.785772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.955 [2024-06-11 15:17:29.785801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.955 qpair failed and we were unable to recover it. 00:32:10.955 [2024-06-11 15:17:29.786164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.955 [2024-06-11 15:17:29.786437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.955 [2024-06-11 15:17:29.786467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:10.955 qpair failed and we were unable to recover it. 00:32:10.955 [2024-06-11 15:17:29.786678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.787012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.787050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 
00:32:11.231 [2024-06-11 15:17:29.787341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.787621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.787656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.787998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.788308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.788339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.788706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.789080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.789111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.789484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.789839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.789868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.790222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.790510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.790538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.790817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.791154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.791184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.791531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.791910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.791939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 
00:32:11.231 [2024-06-11 15:17:29.792310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.792586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.792615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.792841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.793194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.793225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.793518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.793912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.793941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.794313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.794654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.794688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.794981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.795372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.795402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.795768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.796139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.796194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.796505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.796916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.796946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 
00:32:11.231 [2024-06-11 15:17:29.797268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.797551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.797580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.797920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.798285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.798316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.798654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.799019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.799059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.799355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.799746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.799775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.800131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.800482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.800511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.231 qpair failed and we were unable to recover it. 00:32:11.231 [2024-06-11 15:17:29.800786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.231 [2024-06-11 15:17:29.801161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.801191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.801584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.801919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.801954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 
00:32:11.232 [2024-06-11 15:17:29.802267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.802553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.802582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.802930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.803235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.803266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.803544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.803850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.803880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.804248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.804528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.804557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.804845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.805124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.805154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.805517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.805799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.805829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.806060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.806403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.806432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 
00:32:11.232 [2024-06-11 15:17:29.806792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.807070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.807101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.807390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.807745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.807775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.808080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.808407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.808437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.808817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.809185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.809215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.809426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.809733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.809762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.810053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.810388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.810418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.810695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.811060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.811091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 
00:32:11.232 [2024-06-11 15:17:29.811326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.811593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.811622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.812019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.812334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.812365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.812708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.813087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.813118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.813492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.813829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.813859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.814228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.814444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.814474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.814833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.815201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.815231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.815582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.815887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.815916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 
00:32:11.232 [2024-06-11 15:17:29.816221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.816559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.816589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.816877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.817243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.817274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.817700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.818094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.232 [2024-06-11 15:17:29.818125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.232 qpair failed and we were unable to recover it. 00:32:11.232 [2024-06-11 15:17:29.818442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.818826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.818857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.819202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.819489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.819519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.819864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.820225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.820256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.820603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.820970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.821000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 
00:32:11.233 [2024-06-11 15:17:29.821390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.821758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.821787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.822169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.822516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.822545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.822787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.823157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.823187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.823557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.823898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.823927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.824312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.824681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.824710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.825110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.825397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.825429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.825777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.826066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.826096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 
00:32:11.233 [2024-06-11 15:17:29.826397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.826738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.826768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.827112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.827475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.827505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.827868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.828267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.828297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.828553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.828919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.828950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.829281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.829685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.829715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.830020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.830305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.830336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.830634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.830988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.831018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 
00:32:11.233 [2024-06-11 15:17:29.831388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.831679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.831710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.831944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.832314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.832345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.832681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.833052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.833083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.833324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.833615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.833645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.834020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.834256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.834287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.834666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.834950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.834980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.835382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.835606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.835636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 
00:32:11.233 [2024-06-11 15:17:29.836048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.836343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.836373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.836764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.837136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.837167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.233 qpair failed and we were unable to recover it. 00:32:11.233 [2024-06-11 15:17:29.837469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.837757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.233 [2024-06-11 15:17:29.837787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.838164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.838408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.838437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.838767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.839171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.839202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.839467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.839697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.839726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.840127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.840505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.840534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 
00:32:11.234 [2024-06-11 15:17:29.840880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.841267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.841297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.841581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.841865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.841895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.842257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.842603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.842633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.843009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.843400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.843430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.843749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.844021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.844063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.844386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.844685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.844714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.845115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.845404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.845433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 
00:32:11.234 [2024-06-11 15:17:29.845666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.846070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.846101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.846434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.846779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.846808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.847129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.847348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.847378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.847679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.847953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.847983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.848312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.848638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.848669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.848956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.849253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.849284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.849682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.850063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.850094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 
00:32:11.234 [2024-06-11 15:17:29.850423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.850779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.850808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.851184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.851503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.851532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.851846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.852221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.852252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.852555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.852955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.852985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.853205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.853550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.853579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.853962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.854226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.854257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.854495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.854788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.854818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 
00:32:11.234 [2024-06-11 15:17:29.855064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.855442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.855494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.855797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.856036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.856067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.234 qpair failed and we were unable to recover it. 00:32:11.234 [2024-06-11 15:17:29.856401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.234 [2024-06-11 15:17:29.856698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.856730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.857092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.857436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.857467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.857715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.858038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.858069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.858454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.858760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.858790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.859021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.859407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.859439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 
00:32:11.235 [2024-06-11 15:17:29.859768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.860063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.860096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.860507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.860884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.860914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.861325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.861610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.861641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.861996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.862332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.862365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.862670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.863099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.863132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.863462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.863748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.863778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.864135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.864518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.864549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 
00:32:11.235 [2024-06-11 15:17:29.864776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.865173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.865204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.865514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.865857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.865888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.866278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.866598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.866628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.866934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.867354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.867385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.867741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.868092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.868123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.868497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.868872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.868903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.869193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.869428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.869458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 
00:32:11.235 [2024-06-11 15:17:29.869712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.870088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.870120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.870419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.870650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.870679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.871067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.871306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.871336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.871593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.871919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.871949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.872333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.872569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.872601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.872991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.873351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.873383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.235 qpair failed and we were unable to recover it. 00:32:11.235 [2024-06-11 15:17:29.873610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.235 [2024-06-11 15:17:29.874041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.874073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 
00:32:11.236 [2024-06-11 15:17:29.874390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.874809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.874839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.875226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.875605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.875635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.875873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.876297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.876328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.876611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.876938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.876969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.877309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.877589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.877619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.877915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.878362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.878396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.878628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.878975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.879004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 
00:32:11.236 [2024-06-11 15:17:29.879246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.879598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.879629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.879968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.880256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.880288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.880607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.880955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.880985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.881314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.881678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.881709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.882006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.882322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.882356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.882743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.883132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.883165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.883487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.883728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.883759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 
00:32:11.236 [2024-06-11 15:17:29.884000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.884235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.884265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.884594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.884956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.884986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.885384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.885621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.885651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.886021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.886283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.886313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.886695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.886992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.236 [2024-06-11 15:17:29.887021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.236 qpair failed and we were unable to recover it. 00:32:11.236 [2024-06-11 15:17:29.887427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.887755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.887785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.888196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.888495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.888525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 
00:32:11.237 [2024-06-11 15:17:29.888775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.889100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.889133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.889515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.889842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.889872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.890262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.890556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.890586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.890882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.891193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.891223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.891584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.891878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.891913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.892200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.892576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.892606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.892989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.893302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.893334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 
00:32:11.237 [2024-06-11 15:17:29.893639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.894036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.894067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.894387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.894691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.894721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.895046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.895398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.895428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.895804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.896155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.896187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.896487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.896765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.896795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.897181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.897537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.897567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.897979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.898347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.898378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 
00:32:11.237 [2024-06-11 15:17:29.898721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.899087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.899124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.899520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.899832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.899864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.900195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.900519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.900550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.900784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.901066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.901098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.901516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.901833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.901863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.902250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.902585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.902616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.902917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.903133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.903164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 
00:32:11.237 [2024-06-11 15:17:29.903466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.903696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.903726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.904065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.904275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.904305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.237 [2024-06-11 15:17:29.904643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.904948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.237 [2024-06-11 15:17:29.904978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.237 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.905308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.905534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.905569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.905867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.906276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.906307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.906612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.906975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.907007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.907379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.907686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.907716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 
00:32:11.238 [2024-06-11 15:17:29.908132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.908451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.908482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.908870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.909192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.909224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.909531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.909835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.909865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.910284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.910532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.910563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.910816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.911146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.911177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.911433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.911663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.911693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.911994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.912249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.912286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 
00:32:11.238 [2024-06-11 15:17:29.912677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.912997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.913052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.913353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.913675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.913705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.914114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.914418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.914449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.914837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.915189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.915220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.915584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.915951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.915981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.916359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.916713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.916743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.917106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.917437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.917468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 
00:32:11.238 [2024-06-11 15:17:29.917785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.918015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.918058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.918418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.918720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.918750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.919109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.919459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.919490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.919870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.920199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.920231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.920535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.920841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.920871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.238 qpair failed and we were unable to recover it. 00:32:11.238 [2024-06-11 15:17:29.921180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.238 [2024-06-11 15:17:29.921482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.921513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.921839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.922166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.922197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 
00:32:11.239 [2024-06-11 15:17:29.922510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.922858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.922888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.923281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.923660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.923691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.923949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.924238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.924269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.924574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.925001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.925040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.925346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.925644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.925674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.926144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.926471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.926500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.926900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.927172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.927204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 
00:32:11.239 [2024-06-11 15:17:29.927519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.927758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.927790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.928111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.928411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.928441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.928849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.929181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.929213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.929531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.929811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.929841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.930207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.930567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.930598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.930997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.931352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.931385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.931696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.931987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.932017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 
00:32:11.239 [2024-06-11 15:17:29.932358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.932607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.932638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.933094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.933475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.933506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.933754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.934064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.934096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.934425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.934675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.934705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.935116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.935405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.935435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.935809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.936045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.936077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.936464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.936794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.936823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 
00:32:11.239 [2024-06-11 15:17:29.937213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.937510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.937541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.239 qpair failed and we were unable to recover it. 00:32:11.239 [2024-06-11 15:17:29.937889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.239 [2024-06-11 15:17:29.938279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.938310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.938613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.939014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.939058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.939313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.939533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.939563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.939955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.940264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.940295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.940598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.940916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.940946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.941356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.941712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.941742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 
00:32:11.240 [2024-06-11 15:17:29.942114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.942491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.942522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.942987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.943248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.943280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.943657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.944047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.944078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.944403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.944651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.944683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.944936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.945228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.945260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.945520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.945845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.945875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.946205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.946508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.946538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 
00:32:11.240 [2024-06-11 15:17:29.946785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.947168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.947200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.947519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.947755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.947786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.948182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.948545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.948575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.948935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.949323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.949355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.949656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.950112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.950144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.950451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.950778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.950809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.951125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.951432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.951463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 
00:32:11.240 [2024-06-11 15:17:29.951789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.952090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.952121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.952377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.952630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.952660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.952906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.953174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.953206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.953540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.953846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.953877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.954115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.954360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.954390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.240 qpair failed and we were unable to recover it. 00:32:11.240 [2024-06-11 15:17:29.954709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.240 [2024-06-11 15:17:29.955012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.955052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.955352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.955572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.955602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 
00:32:11.241 [2024-06-11 15:17:29.957498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.957883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.957917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.958243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.958603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.958634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.958946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.959202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.959233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.959491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.959738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.959768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.960141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.960499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.960530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.960785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.961151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.961184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.961430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.961723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.961753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 
00:32:11.241 [2024-06-11 15:17:29.962109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.962342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.962372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.962631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.962953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.962983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.963337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.963637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.963667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.963970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.964407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.964439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.964691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.965051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.965083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.965442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.965689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.965721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.966021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.966268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.966299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 
00:32:11.241 [2024-06-11 15:17:29.966717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.967050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.967083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.967470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.967754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.967785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.968038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.968342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.241 [2024-06-11 15:17:29.968374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.241 qpair failed and we were unable to recover it. 00:32:11.241 [2024-06-11 15:17:29.968681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.969007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.969068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.969385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.969721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.969751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.970063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.970359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.970389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.970651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.970945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.970976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 
00:32:11.242 [2024-06-11 15:17:29.971243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.971633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.971663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.972018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.972429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.972461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.972768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.973126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.973159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.973413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.975080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.975136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.975569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.975879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.975910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.976217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.976572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.976604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.976907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.977267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.977301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 
00:32:11.242 [2024-06-11 15:17:29.977625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.979290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.979346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.979752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.980064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.980097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.980338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.980642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.980673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.980965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.981266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.981297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.981672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.981953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.981984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.982313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.982614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.982646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.982949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.983273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.983305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 
00:32:11.242 [2024-06-11 15:17:29.983662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.983980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.984011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.984391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.984632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.984663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.984966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.985275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.985308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.985627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.986014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.986061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.986504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.986896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.986926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.987242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.987624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.987655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.242 [2024-06-11 15:17:29.987903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.988327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.988359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 
00:32:11.242 [2024-06-11 15:17:29.988668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.988965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.242 [2024-06-11 15:17:29.988995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.242 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.989410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.989721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.989751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.990112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.990408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.990438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.990825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.991112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.991145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.991455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.991812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.991843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.992143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.992384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.992414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.992671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.992982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.993013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 
00:32:11.243 [2024-06-11 15:17:29.993357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.995014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.995088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.995373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.995677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.995708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.996124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.996475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.996506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.996837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.997218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.997250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.997508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.997797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.997828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.998140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.998439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.998470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:29.998833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.999164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.999196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 
00:32:11.243 [2024-06-11 15:17:29.999456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.999706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:29.999736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.000088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.000385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.000424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.000738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.000957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.000988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.001393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.001644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.001675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.001977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.002282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.002315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.002702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.002939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.002969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.003332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.003578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.003608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 
00:32:11.243 [2024-06-11 15:17:30.003920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.004239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.004286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.004606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.004988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.005020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.005269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.005576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.005608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.006001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.006304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.006336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.006629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.006951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.006989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.007398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.007622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.007653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.243 qpair failed and we were unable to recover it. 00:32:11.243 [2024-06-11 15:17:30.007904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.008274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.243 [2024-06-11 15:17:30.008305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 
00:32:11.244 [2024-06-11 15:17:30.008639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.008925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.008956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.009346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.009587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.009618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.010062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.010356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.010388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.010680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.010911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.010942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.011281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.011533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.011563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.011934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.012190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.012222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.012475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.012819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.012849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 
00:32:11.244 [2024-06-11 15:17:30.013164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.013553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.013705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.014187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.014725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.014875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.015238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.015788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.015824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.016136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.016382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.016415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.016751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.017075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.017110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.017459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.017698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.017730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.018156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.018471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.018504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 
00:32:11.244 [2024-06-11 15:17:30.018833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.019087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.019120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.019540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.019925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.019957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.020231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.020464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.020496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.020855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.021183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.021221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.021447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.021807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.021839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.022167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.022473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.022502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.022843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.023177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.023208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 
00:32:11.244 [2024-06-11 15:17:30.023562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.023775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.023807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.024116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.024372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.024403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.024687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.024930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.024962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.025296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.025632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.025663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.025964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.026184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.026216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.244 qpair failed and we were unable to recover it. 00:32:11.244 [2024-06-11 15:17:30.026548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.026855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.244 [2024-06-11 15:17:30.026886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.027132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.027363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.027395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 
00:32:11.245 [2024-06-11 15:17:30.027792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.028115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.028147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.028496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.028766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.028797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.029182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.030876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.030931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.031290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.031628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.031658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.032054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.032410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.032441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.032867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.033180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.033212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.033569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.033888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.033920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 
00:32:11.245 [2024-06-11 15:17:30.034214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.034564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.034596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.034894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.035179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.035212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.035510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.035863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.035894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.036168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.036416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.036447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.036847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.037204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.037238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.037550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.037875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.037906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.038208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.038564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.038595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 
00:32:11.245 [2024-06-11 15:17:30.038954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.039286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.039316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.039640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.039869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.039899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.040192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.040502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.040532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.040848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.041170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.041202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.041600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.041891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.041922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.042221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.042563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.042593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.042828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.043151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.043185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 
00:32:11.245 [2024-06-11 15:17:30.043500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.043715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.043744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.044086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.044443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.044473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.044785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.045197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.045228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.245 qpair failed and we were unable to recover it. 00:32:11.245 [2024-06-11 15:17:30.045516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.045827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.245 [2024-06-11 15:17:30.045858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.046286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.046674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.046705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.047041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.047339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.047370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.047602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.047919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.047949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 
00:32:11.246 [2024-06-11 15:17:30.048336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.048566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.048596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.048833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.049063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.049094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.049352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.049652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.049682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.049974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.050301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.050334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.050710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.051054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.051086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.051381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.051720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.051750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.052044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.052344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.052374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 
00:32:11.246 [2024-06-11 15:17:30.052675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.052843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.052874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.053158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.053385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.053415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.053769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.054085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.054117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.054426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.054760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.054790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.055089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.055365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.055395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.055668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.056017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.056057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.056290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.056517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.056548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 
00:32:11.246 [2024-06-11 15:17:30.056901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.057303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.057334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.246 [2024-06-11 15:17:30.057634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.058044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.246 [2024-06-11 15:17:30.058077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.246 qpair failed and we were unable to recover it. 00:32:11.543 [2024-06-11 15:17:30.058372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.543 [2024-06-11 15:17:30.058686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.058717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.059022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.059395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.059426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.059729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.059957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.059987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.060323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.060545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.060575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.060820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.061114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.061145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 
00:32:11.544 [2024-06-11 15:17:30.061426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.061771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.061801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.062078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.062425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.062455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.062746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.062957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.062987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.063307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.063542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.063573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.063794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.064161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.064192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.064424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.064760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.064790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.065087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.065383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.065413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 
00:32:11.544 [2024-06-11 15:17:30.065699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.065981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.066012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.066401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.066629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.066659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.067081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.067354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.067384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.067613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.067905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.067935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.068247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.068596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.068626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.068986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.069355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.069387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.069764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.070169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.070200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 
00:32:11.544 [2024-06-11 15:17:30.070434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.070846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.070877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.071195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.071521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.071550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.071935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.072306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.072337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.072638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.073053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.073084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.073380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.073742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.073772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.074149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.074423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.074453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.074872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.075257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.075287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 
00:32:11.544 [2024-06-11 15:17:30.075538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.075867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.075897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.076173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.076471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.076501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.544 qpair failed and we were unable to recover it. 00:32:11.544 [2024-06-11 15:17:30.076857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.544 [2024-06-11 15:17:30.077131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.077162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.077512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.077807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.077838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.078136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.078449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.078480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.078725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.079012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.079061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.079363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.079637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.079667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 
00:32:11.545 [2024-06-11 15:17:30.080063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.080434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.080464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.080707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.081077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.081108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.081486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.081830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.081861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.082206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.082529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.082560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.082886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.083201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.083241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.083562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.083885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.083924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.084171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.084448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.084489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 
00:32:11.545 [2024-06-11 15:17:30.084856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.085167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.085220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.085630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.085970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.086006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.086316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.086670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.086717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.087195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.087559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.087596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.087879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.088154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.088185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.088565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.088983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.089014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.089423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.089729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.089760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 
00:32:11.545 [2024-06-11 15:17:30.090097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.090420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.090449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.090814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.091178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.091209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.091556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.091957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.091987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.092355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.092587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.092616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.092944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.093267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.093298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.093600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.093902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.093932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.094170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.094468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.094499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 
00:32:11.545 [2024-06-11 15:17:30.094807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.095096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.095128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.095491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.095836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.095865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.096106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.096413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.096444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.545 qpair failed and we were unable to recover it. 00:32:11.545 [2024-06-11 15:17:30.096691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.545 [2024-06-11 15:17:30.097000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.097038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.097412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.097704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.097734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.098044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.098275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.098305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.098707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.099074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.099105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 
00:32:11.546 [2024-06-11 15:17:30.099390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.099741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.099770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.100139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.100414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.100444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.100749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.101044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.101074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.101374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.101769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.101799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.102189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.102534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.102564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.102878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.103195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.103227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.103526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.103798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.103829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 
00:32:11.546 [2024-06-11 15:17:30.104125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.104404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.104434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.104804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.105018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.105057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.105289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.105631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.105662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.105959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.106264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.106294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.106649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.106880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.106910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.107217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.107601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.107631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.107917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.108286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.108316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 
00:32:11.546 [2024-06-11 15:17:30.108599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.108818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.108847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.109084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.109369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.109404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.109752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.110064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.110095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.110390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.110680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.110711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.110995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.111314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.111346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.111575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.111920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.111949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.112331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.112627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.112656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 
00:32:11.546 [2024-06-11 15:17:30.112970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.113260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.113291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.113668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.113952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.113981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.114285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.114624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.114654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.114890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.115261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.546 [2024-06-11 15:17:30.115292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.546 qpair failed and we were unable to recover it. 00:32:11.546 [2024-06-11 15:17:30.115600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.115814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.115849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.116055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.116283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.116313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.116461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.116740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.116770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 
00:32:11.547 [2024-06-11 15:17:30.116995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.117224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.117255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.117563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.117853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.117884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.118183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.118472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.118501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.118720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.119019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.119068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.119429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.119726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.119766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.121302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.121685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.121719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.122097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.122340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.122370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 
00:32:11.547 [2024-06-11 15:17:30.122685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.122919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.122957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.123182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.123504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.123534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.123756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.124048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.124080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.124226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.124598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.124628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.124901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.125122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.125153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.125450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.125754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.125784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.126115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.126391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.126420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 
00:32:11.547 [2024-06-11 15:17:30.126712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.127064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.127095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.127330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.127671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.127700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.128063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.128385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.128414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.128710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.129067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.129103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.129449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.129783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.129813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.130184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.130473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.130502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.130815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.131187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.131218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 
00:32:11.547 [2024-06-11 15:17:30.131537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.131906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.131936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.132281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.132640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.132669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.132960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.133177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.133207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.133448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.133729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.547 [2024-06-11 15:17:30.133759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.547 qpair failed and we were unable to recover it. 00:32:11.547 [2024-06-11 15:17:30.134131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.134413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.134442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.134675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.135083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.135117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.135437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.135833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.135864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 
00:32:11.548 [2024-06-11 15:17:30.136217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.136566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.136596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.136906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.137209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.137239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.137538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.137928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.137957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.138246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.138532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.138563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.138867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.139262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.139293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.139591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.139894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.139924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.140241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.140514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.140544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 
00:32:11.548 [2024-06-11 15:17:30.140932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.141258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.141289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.141529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.141938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.141967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.142371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.142733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.142764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.143113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.143401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.143432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.143678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.143977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.144007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.144250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.144476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.144507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.144823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.145142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.145173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 
00:32:11.548 [2024-06-11 15:17:30.145458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.145684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.145715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.146094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.146327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.146357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.146651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.147043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.147074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.147387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.147618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.147649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.148018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.148322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.148353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.148668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.149009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.548 [2024-06-11 15:17:30.149050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.548 qpair failed and we were unable to recover it. 00:32:11.548 [2024-06-11 15:17:30.149358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.149688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.149718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 
00:32:11.549 [2024-06-11 15:17:30.150004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.150396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.150428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.150656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.151069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.151101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.151403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.151707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.151738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.152054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.152354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.152383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.152692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.152989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.153018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.153286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.153681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.153710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.154000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.154372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.154403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 
00:32:11.549 [2024-06-11 15:17:30.154769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.155064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.155095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.155420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.155702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.155732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.156023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.156300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.156329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.156683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.157035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.157066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.157454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.157845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.157875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.158228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.158514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.158544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.158773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.159067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.159098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 
00:32:11.549 [2024-06-11 15:17:30.159502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.159788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.159817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.160124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.160509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.160539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.160890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.161135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.161165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.161460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.161773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.161802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.162070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.162461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.162492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.162887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.163254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.163285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.163659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.164036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.164067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 
00:32:11.549 [2024-06-11 15:17:30.164312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.164605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.164635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.165039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.165266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.165297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.165578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.165793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.165822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.166137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.168273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.168334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.168734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.169134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.169167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.169518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.169809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.169839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 00:32:11.549 [2024-06-11 15:17:30.170126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.170406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.549 [2024-06-11 15:17:30.170436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.549 qpair failed and we were unable to recover it. 
00:32:11.550 [2024-06-11 15:17:30.170713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.171092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.171123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.171358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.171671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.171701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.172083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.172332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.172362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.172718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.173064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.173096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.173498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.173803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.173833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.174243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.174475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.174505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.174867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.175214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.175245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 
00:32:11.550 [2024-06-11 15:17:30.175645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.176007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.176048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.176373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.176674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.176704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.176993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.177364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.177395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.177695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.177986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.178016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.178325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.178639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.178669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.179074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.179319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.179349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.179590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.179953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.179982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 
00:32:11.550 [2024-06-11 15:17:30.180320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.180640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.180671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.181019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.181380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.181411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.181661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.182040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.182072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.182385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.182683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.182713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.183046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.183389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.183419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.183749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.184093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.184125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.184369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.184801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.184831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 
00:32:11.550 [2024-06-11 15:17:30.185215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.185589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.185619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.185970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.186370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.186402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.186707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.186942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.186972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.187298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.187534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.187564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.187887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.188286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.188318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.188557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.188898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.188929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 00:32:11.550 [2024-06-11 15:17:30.189222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.189578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.189609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.550 qpair failed and we were unable to recover it. 
00:32:11.550 [2024-06-11 15:17:30.189984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.190282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.550 [2024-06-11 15:17:30.190315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.190572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.190890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.190921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.191210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.191456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.191487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.191826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.192201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.192234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.192457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.192741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.192772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.193054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.193412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.193443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.193746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.194045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.194076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 
00:32:11.551 [2024-06-11 15:17:30.194330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.194643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.194674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.195086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.195343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.195374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.195776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.196052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.196084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.196389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.196687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.196717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.197097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.197420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.197451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.197812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.198061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.198093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.198377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.198592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.198623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 
00:32:11.551 [2024-06-11 15:17:30.199035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.199327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.199357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.199582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.199813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.199844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.200236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.200589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.200620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.200931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.201242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.201274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.201568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.201951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.201981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.202290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.202621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.202650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.202949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.203179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.203210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 
00:32:11.551 [2024-06-11 15:17:30.203443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.203672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.203701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.204015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.204304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.204335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.204648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.205005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.205048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.205361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.205645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.205675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.206038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.206262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.206292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.206654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.207039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.207070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.207423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.207817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.207848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 
00:32:11.551 [2024-06-11 15:17:30.208205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.208581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.208612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.551 qpair failed and we were unable to recover it. 00:32:11.551 [2024-06-11 15:17:30.208899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.209243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.551 [2024-06-11 15:17:30.209275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.209572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.209866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.209896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.210255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.210607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.210638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.210929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.211264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.211295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.211600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.211959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.211990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.212409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.212760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.212790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 
00:32:11.552 [2024-06-11 15:17:30.213018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.213427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.213458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.213755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.214048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.214080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.214398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.214740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.214771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.215011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.215341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.215372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.215755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.216075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.216107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.216495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.216791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.216821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.217111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.217398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.217428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 
00:32:11.552 [2024-06-11 15:17:30.217866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.218169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.218200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.218584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.218961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.218996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.219410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.219735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.219766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.220151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.220498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.220529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.220894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.221184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.221215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.221650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.222019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.222061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.222299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.222646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.222676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 
00:32:11.552 [2024-06-11 15:17:30.223099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.223497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.223527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.223883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.224119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.224151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.224470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.224758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.224788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.225078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.225400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.225430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.225795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.226180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.226218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.226607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.226986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.227017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.227406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.227787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.227818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 
00:32:11.552 [2024-06-11 15:17:30.228087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.228316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.228346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.228653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.228884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.228914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.552 qpair failed and we were unable to recover it. 00:32:11.552 [2024-06-11 15:17:30.229213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.229572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.552 [2024-06-11 15:17:30.229604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.229930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.230168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.230199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.230531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.230845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.230875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.231171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.231541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.231572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.231930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.232228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.232260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 
00:32:11.553 [2024-06-11 15:17:30.232498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.232880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.232916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.233227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.233433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.233463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.233782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.234013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.234053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.234415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.234651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.234681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.235000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.235331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.235363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.235656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.235947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.235977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.236292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.236674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.236704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 
00:32:11.553 [2024-06-11 15:17:30.237048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.237344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.237374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.237701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.238049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.238081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.238399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.238628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.238658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.238954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.239306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.239343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.239636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.240013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.240055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.240355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.240652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.240682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.241011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.241279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.241309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 
00:32:11.553 [2024-06-11 15:17:30.241711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.242063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.242095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.242397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.242704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.242735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.242909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.243189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.243221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.243609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.243908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.243938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.244248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.244481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.244511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.244824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.245198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.553 [2024-06-11 15:17:30.245229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.553 qpair failed and we were unable to recover it. 00:32:11.553 [2024-06-11 15:17:30.245464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.245757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.245788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 
00:32:11.554 [2024-06-11 15:17:30.246100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.246387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.246417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.246735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.247090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.247122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.247371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.247666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.247696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.247995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.248239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.248271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.248576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.248929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.248959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.249274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.249569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.249599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.249891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.250275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.250307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 
00:32:11.554 [2024-06-11 15:17:30.250628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.250945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.250975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.251275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.251518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.251548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.251936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.252303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.252334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.252626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.252912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.252943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.253223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.253573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.253604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.253997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.254382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.254413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.254588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.254935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.254965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 
00:32:11.554 [2024-06-11 15:17:30.255354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.255653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.255683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.255981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.256321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.256353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.256634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.256915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.256945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.257166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.257448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.257478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.257761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.258052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.258083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.258384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.258687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.258717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.259105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.259382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.259412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 
00:32:11.554 [2024-06-11 15:17:30.259731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.260047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.260078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.260461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.260770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.260801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.261125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.261533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.261563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.261807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.262119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.262170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.262520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.262886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.262916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.263166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.263396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.263427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 00:32:11.554 [2024-06-11 15:17:30.263777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.263995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.554 [2024-06-11 15:17:30.264035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.554 qpair failed and we were unable to recover it. 
00:32:11.555 [2024-06-11 15:17:30.264388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.264624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.264654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.264965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.265245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.265276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.265505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.265782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.265812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.266043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.266393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.266423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.266744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.266962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.266992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.267357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.267600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.267630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.267945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.268177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.268209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 
00:32:11.555 [2024-06-11 15:17:30.268585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.268809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.268839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.269214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.269586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.269616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.269897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.270191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.270223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.270530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.270827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.270857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.271149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.271525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.271555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.271849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.272193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.272225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.272648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.272949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.272978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 
00:32:11.555 [2024-06-11 15:17:30.273285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.273507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.273536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.273778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.274122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.274154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.274364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.274735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.274764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.275130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.275427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.275457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.275760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.276117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.276148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.276437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.276735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.276765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.277059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.277344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.277374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 
00:32:11.555 [2024-06-11 15:17:30.277616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.277961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.277991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.278299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.278608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.278638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.278940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.279219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.279250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.279541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.279893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.279922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.280178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.280468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.280498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.280787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.281143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.281174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 00:32:11.555 [2024-06-11 15:17:30.281395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.281679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.281709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.555 qpair failed and we were unable to recover it. 
00:32:11.555 [2024-06-11 15:17:30.282060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.282341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.555 [2024-06-11 15:17:30.282370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.282661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.283040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.283071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.283439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.283670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.283700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.283996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.284303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.284334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.284577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.284944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.284974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.285269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.285479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.285509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.285789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.286085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.286117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 
00:32:11.556 [2024-06-11 15:17:30.286490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.286725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.286755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.287134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.287364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.287394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.287745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.287965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.287994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.288316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.288617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.288647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.288940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.289282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.289313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.289699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.289925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.289954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.290252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.290554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.290584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 
00:32:11.556 [2024-06-11 15:17:30.290939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.291298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.291328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.291742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.292117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.292147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.292519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.292797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.292826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.293127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.293472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.293501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.293790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.294055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.294085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.294429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.294670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.294700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.294927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.295210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.295241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 
00:32:11.556 [2024-06-11 15:17:30.295602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.295972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.296002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.296235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.296546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.296576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.296859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.297163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.297193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.297481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.297798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.297828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.298204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.298444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.298474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.298775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.299087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.299117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.299396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.299679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.299709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 
00:32:11.556 [2024-06-11 15:17:30.300057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.300426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.300456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.556 qpair failed and we were unable to recover it. 00:32:11.556 [2024-06-11 15:17:30.300734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.301013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.556 [2024-06-11 15:17:30.301052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.301374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.301652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.301682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.302021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.302309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.302340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.302717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.302933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.302982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.303242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.303530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.303559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.303787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.304069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.304101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 
00:32:11.557 [2024-06-11 15:17:30.304454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.304796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.304826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.305140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.305504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.305533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.305908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.306157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.306189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.306490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.306832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.306862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.307142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.307356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.307386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.307696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.308066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.308097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.308445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.308679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.308708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 
00:32:11.557 [2024-06-11 15:17:30.309082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.309425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.309455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.309753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.310045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.310076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.310434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.310809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.310839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.311135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.311505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.311535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.311819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.312094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.312124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.312356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.312721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.312750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.312979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.313197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.313227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 
00:32:11.557 [2024-06-11 15:17:30.313578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.313920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.313949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.314267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.314555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.314585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.314932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.315297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.315327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.315564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.315928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.315957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.316324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.316594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.316624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.557 qpair failed and we were unable to recover it. 00:32:11.557 [2024-06-11 15:17:30.316911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.557 [2024-06-11 15:17:30.317251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.558 [2024-06-11 15:17:30.317287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.558 qpair failed and we were unable to recover it. 00:32:11.558 [2024-06-11 15:17:30.317632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.558 [2024-06-11 15:17:30.317853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.558 [2024-06-11 15:17:30.317883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.558 qpair failed and we were unable to recover it. 
00:32:11.558 [2024-06-11 15:17:30.318113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.558 [2024-06-11 15:17:30.318416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420
00:32:11.558 qpair failed and we were unable to recover it.
00:32:11.832 [... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 15:17:30.318 and 15:17:30.413 ...]
00:32:11.832 [2024-06-11 15:17:30.414185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.414470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.414499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.414731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.414939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.414969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.415216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.415438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.415468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.415704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.415978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.416008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.416306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.416548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.416578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.416861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.417272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.417323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.417621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.417892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.417922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 
00:32:11.832 [2024-06-11 15:17:30.418214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.418514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.418543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.418823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.419055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.419085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.419312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.419516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.419546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.419887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.420225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.420277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.420588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.420892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.420921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.421264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.421608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.421637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.421939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.422152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.422181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 
00:32:11.832 [2024-06-11 15:17:30.422522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.422801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.422830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.423122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.423351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.423386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.423619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.423890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.423921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.424164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.424501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.424530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.424819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.425112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.425143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.425435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.425713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.425743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.832 [2024-06-11 15:17:30.425955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.426224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.426254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 
00:32:11.832 [2024-06-11 15:17:30.426532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.426805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.832 [2024-06-11 15:17:30.426835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.832 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.427089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.427402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.427431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.427729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.428064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.428094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.428297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.428572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.428602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.428841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.429143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.429182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.429463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.429768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.429797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.430022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.430334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.430363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 
00:32:11.833 [2024-06-11 15:17:30.430606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.430896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.430925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.431149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.431424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.431454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.431752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.432092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.432122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.432436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.432648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.432677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.432961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.433186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.433217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.433450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.433665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.433695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.433966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.434313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.434344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 
00:32:11.833 [2024-06-11 15:17:30.434587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.434861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.434896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.435259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.435528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.435557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.435856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.436195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.436225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.436502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.436720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.436750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.437048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.437411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.437440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.437654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.437941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.437971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.438270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.438467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.438496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 
00:32:11.833 [2024-06-11 15:17:30.438794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.439166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.439196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.439534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.439827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.439856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.440071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.440385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.440414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.440643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.441001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.441061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.441388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.441684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.441714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.442054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.442349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.833 [2024-06-11 15:17:30.442378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.833 qpair failed and we were unable to recover it. 00:32:11.833 [2024-06-11 15:17:30.442602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.442881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.442911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 
00:32:11.834 [2024-06-11 15:17:30.443120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.443335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.443364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.443694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.443963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.443992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.444275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.444587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.444616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.444980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.445335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.445366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.445581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.445801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.445830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.446044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.446316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.446345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.446620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.446981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.447010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 
00:32:11.834 [2024-06-11 15:17:30.447374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.447711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.447740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.447962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.448268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.448299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.448511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.448792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.448822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.449113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.449449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.449479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.449753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.449980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.450011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.450411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.450611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.450640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.450991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.451217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.451248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 
00:32:11.834 [2024-06-11 15:17:30.451523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.451795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.451824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.452141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.452455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.452484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.452730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.453019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.453058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.453365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.453511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.453540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.453812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.454022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.454061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.454401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.454739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.454768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.455122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.455344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.455373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 
00:32:11.834 [2024-06-11 15:17:30.455685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.456049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.456080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.834 qpair failed and we were unable to recover it. 00:32:11.834 [2024-06-11 15:17:30.456292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.834 [2024-06-11 15:17:30.456575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.456605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.456882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.457246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.457276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.457587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.457815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.457845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.458059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.458353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.458383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.458656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.458929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.458958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.459251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.459615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.459645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 
00:32:11.835 [2024-06-11 15:17:30.459949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.460216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.460247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.460530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.460806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.460835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.461156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.461551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.461580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.461817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.462152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.462182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.462492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.462698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.462726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.463068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.463405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.463434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.463726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.464064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.464094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 
00:32:11.835 [2024-06-11 15:17:30.464458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.464752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.464782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.465057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.465394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.465424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.465720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.466082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.466113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.466349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.466639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.466668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.466885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.467238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.467269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.467487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.467792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.467821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.468097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.468492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.468521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 
00:32:11.835 [2024-06-11 15:17:30.468747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.469062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.469093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.469381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.469720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.469750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.470042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.470322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.470352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.470632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.470834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.470862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.471230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.471432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.471461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.471753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.472092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.472122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.472422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.472722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.472751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 
00:32:11.835 [2024-06-11 15:17:30.473128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.473484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.473513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.473750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.474034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.835 [2024-06-11 15:17:30.474066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.835 qpair failed and we were unable to recover it. 00:32:11.835 [2024-06-11 15:17:30.474358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.474624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.474653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.474876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.475161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.475191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.475535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.475809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.475839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.476051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.476336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.476366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.476735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.477074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.477104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 
00:32:11.836 [2024-06-11 15:17:30.477378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.477651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.477681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.477968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.478238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.478268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.478585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.478973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.479003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.479317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.479627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.479657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.479880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.480169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.480199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.480575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.480912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.480941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.481282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.481574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.481604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 
00:32:11.836 [2024-06-11 15:17:30.481892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.482127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.482158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.482461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.482669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.482699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.482931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.483157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.483188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.483462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.483744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.483774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.484123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.484461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.484491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.484778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.485048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.485079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.485294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.485490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.485520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 
00:32:11.836 [2024-06-11 15:17:30.485832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.486128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.486158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.486403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.486618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.486647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.486964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.487303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.487334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.487618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.487857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.487886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.488168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.488392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.488423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.488776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.489001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.489037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.489328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.489538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.489568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 
00:32:11.836 [2024-06-11 15:17:30.489837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.490131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.490161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.490381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.490717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.490746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.836 [2024-06-11 15:17:30.491060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.491371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.836 [2024-06-11 15:17:30.491401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.836 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.491694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.491914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.491943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.492333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.492556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.492585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.492834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.493117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.493148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.493521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.493730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.493760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 
00:32:11.837 [2024-06-11 15:17:30.494087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.494394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.494424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.494731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.495000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.495049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.495362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.495733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.495762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.496057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.496360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.496390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.496730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.497066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.497096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.497324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.497660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.497689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.498049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.498319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.498348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 
00:32:11.837 [2024-06-11 15:17:30.498634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.498901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.498931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.499292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.499514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.499542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.499764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.500050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.500080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.500305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.500525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.500554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.500780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.501071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.501102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.501381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.501736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.501767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.502161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.502386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.502416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 
00:32:11.837 [2024-06-11 15:17:30.502632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.502891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.502921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.503315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.503652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.503681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.503983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.504266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.504297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.504602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.504882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.504912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.505229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.505583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.505613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.505842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.506077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.506107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.506392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.506676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.506706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 
00:32:11.837 [2024-06-11 15:17:30.506928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.507137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.507167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.507443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.507804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.507833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.507991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.508371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.508402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.837 qpair failed and we were unable to recover it. 00:32:11.837 [2024-06-11 15:17:30.508772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.837 [2024-06-11 15:17:30.509046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.509076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.509363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.509587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.509616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.509965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.510236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.510266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.510585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.510821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.510850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 
00:32:11.838 [2024-06-11 15:17:30.511143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.511429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.511458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.511737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.512050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.512080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.512452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.512677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.512707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.513072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.513427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.513456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.513682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.513994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.514024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.514271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.514574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.514608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.514837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.515179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.515209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 
00:32:11.838 [2024-06-11 15:17:30.515570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.515858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.515887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.516119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.516403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.516432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.516717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.517051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.517082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.517396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.517677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.517706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.518062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.518420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.518449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.518789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.519173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.519203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.519509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.519795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.519824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 
00:32:11.838 [2024-06-11 15:17:30.520144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.520382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.520412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.520726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.521005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.521050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.521329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.521712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.521742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.522016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.522302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.522331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.522677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.522971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.522998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.523310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.523594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.523621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.523833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.524060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.524088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 
00:32:11.838 [2024-06-11 15:17:30.524465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.524739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.524766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.525081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.525421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.525448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.838 qpair failed and we were unable to recover it. 00:32:11.838 [2024-06-11 15:17:30.525726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.526014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.838 [2024-06-11 15:17:30.526051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.526396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.526764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.526793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.527082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.527388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.527423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.527792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.527998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.528054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.528345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.528619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.528649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 
00:32:11.839 [2024-06-11 15:17:30.528877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.529172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.529202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.529430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.529722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.529751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.530040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.530254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.530284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.530652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.530934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.530963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.531258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.531591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.531620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.532000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.532283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.532313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.532683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.533035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.533066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 
00:32:11.839 [2024-06-11 15:17:30.533349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.533664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.533700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.533867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.534233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.534264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.534608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.534831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.534860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.535094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.535314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.535343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.535710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.536021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.536074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.536356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.536719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.536748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.537088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.537405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.537434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 
00:32:11.839 [2024-06-11 15:17:30.537798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.538158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.538189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.538464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.538799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.538829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.539133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.539438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.539468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.539809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.540034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.540064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.540418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.540756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.540786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.839 qpair failed and we were unable to recover it. 00:32:11.839 [2024-06-11 15:17:30.541058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.839 [2024-06-11 15:17:30.541358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.541387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.541701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.541910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.541939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 
00:32:11.840 [2024-06-11 15:17:30.542221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.542504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.542533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.542687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.542990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.543019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.543333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.543631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.543660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.543944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.544279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.544310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.544590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.544873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.544902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.545186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.545570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.545599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.545938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.546278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.546309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 
00:32:11.840 [2024-06-11 15:17:30.546690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.546898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.546927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.547146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.547487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.547516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.547886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.548128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.548159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.548455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.548720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.548749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.549114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.549332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.549360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.549641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.549934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.549963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.550255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.550564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.550593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 
00:32:11.840 [2024-06-11 15:17:30.550819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.551049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.551079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.551394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.551702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.551731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.551957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.552378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.552408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.552741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.552966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.552995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.553287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.553648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.553678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.554068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.554339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.554368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.554685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.554991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.555021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 
00:32:11.840 [2024-06-11 15:17:30.555320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.555660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.555689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.555914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.556133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.556165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.556444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.556709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.556738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.557044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.557382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.557411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.557634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.557856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.557886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.558114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.558340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.558369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.840 qpair failed and we were unable to recover it. 00:32:11.840 [2024-06-11 15:17:30.558741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.840 [2024-06-11 15:17:30.559052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.559082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 
00:32:11.841 [2024-06-11 15:17:30.559426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.559710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.559740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.560094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.560456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.560486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.560826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.561182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.561213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.561528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.561813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.561842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.562183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.562533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.562561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.562843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.563208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.563238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.563533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.563799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.563829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 
00:32:11.841 [2024-06-11 15:17:30.564115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.564455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.564484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.564841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.565199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.565229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.565561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.565845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.565875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.566168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.566370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.566398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.566762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.566979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.567009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.567301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.567693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.567722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.568003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.568293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.568324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 
00:32:11.841 [2024-06-11 15:17:30.568665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.568873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.568903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.569215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.569550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.569579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.569779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.570046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.570076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.570432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.570740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.570769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.570987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.571305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.571336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.571582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.571934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.571964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.572261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.572535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.572564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 
00:32:11.841 [2024-06-11 15:17:30.572855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.573218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.573249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.573538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.573836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.573866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.574141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.574443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.574472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.574756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.575046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.575076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.575356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.575653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.575683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.576059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.576382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.576412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 00:32:11.841 [2024-06-11 15:17:30.576692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.576888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.576920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.841 qpair failed and we were unable to recover it. 
00:32:11.841 [2024-06-11 15:17:30.577211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.841 [2024-06-11 15:17:30.577437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.577468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.577760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.578146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.578186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.578423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.578706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.578736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.579046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.579267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.579297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.579573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.579919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.579948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.580245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.580586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.580616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.580831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.581204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.581234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 
00:32:11.842 [2024-06-11 15:17:30.581519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.581802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.581832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.582066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.582280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.582310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.582602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.582954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.582983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.583281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.583505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.583535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.583824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.584061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.584092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.584373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.584656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.584686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.584894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.585235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.585266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 
00:32:11.842 [2024-06-11 15:17:30.585549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.585822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.585852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.586133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.586417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.586446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.586739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.587035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.587066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.587299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.587536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.587565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.587857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.588077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.588109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.588397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.588763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.588793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.589096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.589299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.589329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 
00:32:11.842 [2024-06-11 15:17:30.589553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.589950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.589984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.590312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.590675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.590704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.591045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.591322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.591351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.591701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.592063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.592093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.592383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.592747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.592777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.593001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.593248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.593279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.842 [2024-06-11 15:17:30.593652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.594022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.594063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 
00:32:11.842 [2024-06-11 15:17:30.594407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.594709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.842 [2024-06-11 15:17:30.594738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.842 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.594971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.595267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.595298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.595626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.595905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.595935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.596159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.596446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.596476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.596823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.597171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.597202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.597547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.597910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.597940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.598280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.598558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.598588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 
00:32:11.843 [2024-06-11 15:17:30.598806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.599113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.599143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.599428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.599732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.599762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.600066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.600361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.600390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.600760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.601041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.601072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.601372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.601709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.601737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.602015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.602306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.602335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.602609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.602903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.602933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 
00:32:11.843 [2024-06-11 15:17:30.603156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.603393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.603423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.603741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.603956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.603985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.604340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.604570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.604599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.604946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.605244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.605274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.605563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.605874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.605904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.606177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.606393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.606423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.606702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.606986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.607015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 
00:32:11.843 [2024-06-11 15:17:30.607316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.607652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.607681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.607989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.608281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-06-11 15:17:30.608312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-06-11 15:17:30.608661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.608878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.608907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.609130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.609410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.609439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.609756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.610046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.610076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.610300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.610572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.610601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.610819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.611216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.611247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 
00:32:11.844 [2024-06-11 15:17:30.611531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.611912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.611941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.612163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.612443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.612472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.612722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.613079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.613109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.613316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.613683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.613712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.614018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.614365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.614395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.614619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.614814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.614844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.615185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.615369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.615398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 
00:32:11.844 [2024-06-11 15:17:30.615763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.615989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.616018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.616296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.616559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.616587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.616815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.617089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.617120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.617412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.617772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.617802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.618095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.618381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.618410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.618761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.618976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.619005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.619380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.619669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.619698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 
00:32:11.844 [2024-06-11 15:17:30.620049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.620324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.620353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.620638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.620939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.620968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.621279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.621547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.621582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.621804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.621946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.621976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.622279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.622631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.622660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.623001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.623293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.623322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.623665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.623942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.623971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 
00:32:11.844 [2024-06-11 15:17:30.624348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.624553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.624582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.624854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.625124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.625154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.844 qpair failed and we were unable to recover it. 00:32:11.844 [2024-06-11 15:17:30.625543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.844 [2024-06-11 15:17:30.625877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.625906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.626183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.626450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.626478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.626764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.627148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.627178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.627470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.627687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.627721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.628007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.628380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.628410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 
00:32:11.845 [2024-06-11 15:17:30.628626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.628968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.628997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.629356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.629693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.629722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.629995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.630368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.630398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.630711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.630933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.630962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.631263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.631603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.631633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.631910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.632245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.632275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.632615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.632887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.632916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 
00:32:11.845 [2024-06-11 15:17:30.633259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.633460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.633489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.633837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.634223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.634253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.634608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.634998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.635035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.635381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.635588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.635617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.635955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.636233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.636264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.636611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.636974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.637005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.637288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.637522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.637552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 
00:32:11.845 [2024-06-11 15:17:30.637866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.638228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.638258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.638559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.638898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.638926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.639200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.639503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.639532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.639818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.640090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.640121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.640414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.640725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.640755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.641117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.641398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.641428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.641816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.642158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.642188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 
00:32:11.845 [2024-06-11 15:17:30.642428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.642702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.642731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.643043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.643347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.643377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.643694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.644005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.644059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.845 qpair failed and we were unable to recover it. 00:32:11.845 [2024-06-11 15:17:30.644284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.845 [2024-06-11 15:17:30.644568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.644597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.644939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.645157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.645186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.645487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.645763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.645792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.646106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.646385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.646413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 
00:32:11.846 [2024-06-11 15:17:30.646712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.647100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.647130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.647350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.647719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.647748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.648089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.648446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.648475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.648816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.649231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.649261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.649556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.649755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.649785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.650088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.650397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.650426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.650643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.651017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.651055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 
00:32:11.846 [2024-06-11 15:17:30.651222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.651585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.651613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.651840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.652121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.652151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.652473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.652744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.652773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.653058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.653278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.653306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.653675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.654073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.654103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.654465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.654682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.654711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.654995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.655284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.655314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 
00:32:11.846 [2024-06-11 15:17:30.655590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.655881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.655910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.656129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.656465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.656495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.656836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.657202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.657233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.657607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.657994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.658023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.658312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.658603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.658632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.658974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.659343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.659374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.659595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.659859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.659888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 
00:32:11.846 [2024-06-11 15:17:30.660228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.660439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.660473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.660762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.660980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.661008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.661360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.661715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.846 [2024-06-11 15:17:30.661744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:11.846 qpair failed and we were unable to recover it. 00:32:11.846 [2024-06-11 15:17:30.662088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.662356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.662386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.662755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.662984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.663013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.663390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.663601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.663630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.663941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.664206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.664236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 
00:32:12.115 [2024-06-11 15:17:30.664584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.664943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.664972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.665208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.665424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.665453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.665669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.665952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.665982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.666219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.666495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.666525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.666897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.667116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.667147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.667437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.667773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.667801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.668088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.668325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.668354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 
00:32:12.115 [2024-06-11 15:17:30.668648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.668865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.668895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.669236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.669505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.669535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.669884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.670236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.670267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.670584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.670921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.670950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.671237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.671572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.671601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.671872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.672236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.672266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.672576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.672917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.672947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 
00:32:12.115 [2024-06-11 15:17:30.673306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.673618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.673648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.673967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.674252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.674282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.115 qpair failed and we were unable to recover it. 00:32:12.115 [2024-06-11 15:17:30.674504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.115 [2024-06-11 15:17:30.674712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.674741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.675108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.675450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.675478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.675763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.676048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.676078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.676381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.676661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.676691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.676983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.677243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.677274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 
00:32:12.116 [2024-06-11 15:17:30.677562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.677856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.677886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.678252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.678613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.678643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.678986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.679332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.679363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.679677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.680044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.680075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.680293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.680597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.680627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.680846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.681118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.681148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.681449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.681727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.681756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 
00:32:12.116 [2024-06-11 15:17:30.682116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.682404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.682433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.682716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.682917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.682946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.683238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.683516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.683545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.683826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.684113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.684142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.684514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.684782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.684811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.685022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.685320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.685350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.685637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.685929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.685959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 
00:32:12.116 [2024-06-11 15:17:30.686329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.686548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.686578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.686946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.687223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.687252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.687484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.687861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.687890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.116 qpair failed and we were unable to recover it. 00:32:12.116 [2024-06-11 15:17:30.688230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.688588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.116 [2024-06-11 15:17:30.688617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.688848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.689121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.689151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.689378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.689716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.689746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.690045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.690347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.690377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 
00:32:12.117 [2024-06-11 15:17:30.690603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.690820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.690849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.691191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.691535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.691564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.691863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.692202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.692238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.692515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.692851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.692881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.693225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.693596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.693626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.693897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.694174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.694204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.694512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.694862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.694891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 
00:32:12.117 [2024-06-11 15:17:30.695111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.695397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.695427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.695652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.696016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.696053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.696341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.696553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.696582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.696874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.697147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.697177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.697521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.697872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.697901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.698243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.698399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.698428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.698806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.699117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.699147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 
00:32:12.117 [2024-06-11 15:17:30.699425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.699644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.699674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.700048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.700258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.700288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.700634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.700903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.700932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.701235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.701573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.701602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.701969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.702258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.702287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.702583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.702942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.702971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.703192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.703461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.703490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 
00:32:12.117 [2024-06-11 15:17:30.703834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.704186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.704216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.704501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.704787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.704816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.705095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.705375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.705404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.117 [2024-06-11 15:17:30.705746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.705967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.117 [2024-06-11 15:17:30.705996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.117 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.706211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.706518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.706547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.706783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.707120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.707150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.707442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.707734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.707764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 
00:32:12.118 [2024-06-11 15:17:30.708066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.708397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.708427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.708728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.709014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.709056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.709420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.709704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.709734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.709959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.710321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.710351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.710694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.710984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.711014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.711378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.711742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.711771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.712088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.712357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.712386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 
00:32:12.118 [2024-06-11 15:17:30.712728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.713013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.713053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.713323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.713612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.713642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.714009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.714377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.714407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.714651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.714875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.714904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.715268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.715613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.715642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.716011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.716392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.716423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.716580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.716941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.716970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 
00:32:12.118 [2024-06-11 15:17:30.717338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.717618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.717647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.717947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.718236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.718267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.718543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.718881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.718910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.719266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.719637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.719667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.720014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.720324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.720353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.720701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.720989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.721019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.721379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.721591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.721620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 
00:32:12.118 [2024-06-11 15:17:30.721989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.722384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.118 [2024-06-11 15:17:30.722414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.118 qpair failed and we were unable to recover it. 00:32:12.118 [2024-06-11 15:17:30.722696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.723058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.723088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.723240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.723560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.723590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.723954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.724290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.724321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.724690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.724914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.724950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.725305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.725636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.725665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.726043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.726322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.726350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 
00:32:12.119 [2024-06-11 15:17:30.726631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.726925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.726954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.727178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.727456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.727486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.727856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.728146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.728176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.728537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.728895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.728924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.729263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.729601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.729630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.729900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.730247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.730277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.730505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.730843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.730873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 
00:32:12.119 [2024-06-11 15:17:30.731150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.731441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.731475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.731785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.732096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.732126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.732439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.732720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.732750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.733119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.733339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.733368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.733596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.733864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.733894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.734178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.734352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.734381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.734665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.734946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.734976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 
00:32:12.119 [2024-06-11 15:17:30.735383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.735613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.735642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.735915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.736323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.736354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.736668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.736940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.736969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.737296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.737659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.737688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.737977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.738257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.738287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.738515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.738873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.738902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 00:32:12.119 [2024-06-11 15:17:30.739254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.739549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.739578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.119 qpair failed and we were unable to recover it. 
00:32:12.119 [2024-06-11 15:17:30.739885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.740179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.119 [2024-06-11 15:17:30.740210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.740513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.740852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.740881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.741033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.741312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.741341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.741561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.741898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.741927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.742142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.742354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.742383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.742753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.742969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.742998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.743486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.743809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.743848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 
00:32:12.120 [2024-06-11 15:17:30.744173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.744397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.744429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.744736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.745043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.745074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.745452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.745689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.745718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.745887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.746171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.746201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.746454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.746801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.746830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.747152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.747511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.747540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.747839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.748083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.748114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 
00:32:12.120 [2024-06-11 15:17:30.748341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.748621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.748650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.748991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.749355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.749385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.749672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.749888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.749918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.750081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.750394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.750423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.750766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.751107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.751138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.751427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.751795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.751825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 00:32:12.120 [2024-06-11 15:17:30.752219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.752437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.120 [2024-06-11 15:17:30.752467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.120 qpair failed and we were unable to recover it. 
00:32:12.121 [2024-06-11 15:17:30.752695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.752903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.752932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.753234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.753439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.753468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.753781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.753989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.754018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.754343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.754560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.754589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.754868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.755182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.755212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.755494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.755834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.755862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.756152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.756456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.756486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 
00:32:12.121 [2024-06-11 15:17:30.756763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.757046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.757075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.757351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.757684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.757714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.758055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.758351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.758381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.758614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.758919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.758948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.759233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.759520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.759549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.759907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.760211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.760242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.760516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.760901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.760930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 
00:32:12.121 [2024-06-11 15:17:30.761202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.761539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.761568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.761805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.762031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.762062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.762430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.762795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.762825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.763048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.763381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.763410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.763707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.764070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.764099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.764332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.764635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.764665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.764981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.765371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.765402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 
00:32:12.121 [2024-06-11 15:17:30.765825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.766115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.766144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.766427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.766641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.766670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.766896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.767232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.767263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.767481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.767760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.767790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.768130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.768432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.768461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.768865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.769135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.769166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.121 qpair failed and we were unable to recover it. 00:32:12.121 [2024-06-11 15:17:30.769483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.121 [2024-06-11 15:17:30.769764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.769793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 
00:32:12.122 [2024-06-11 15:17:30.770076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.770362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.770391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.770758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.771114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.771144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.771509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.771854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.771884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.772107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.772407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.772438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.772780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.773084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.773114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.773454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.773823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.773852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.774157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.774465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.774494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 
00:32:12.122 [2024-06-11 15:17:30.774885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.775223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.775253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.775490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.775780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.775810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.776046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.776264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.776294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.776609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.776997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.777036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.777316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.777535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.777564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.777788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.778071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.778102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.778343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.778733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.778762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 
00:32:12.122 [2024-06-11 15:17:30.779070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.779293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.779322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.779542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.779838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.779866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.780110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.780388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.780417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.780626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.780988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.781018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.781366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.781712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.781742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.782084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.782368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.782398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.782769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.783112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.783143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 
00:32:12.122 [2024-06-11 15:17:30.783484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.783800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.783829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.784107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.784473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.784502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.784787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.785010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.785047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.785413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.785696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.785725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.122 [2024-06-11 15:17:30.786106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.786382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.122 [2024-06-11 15:17:30.786412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.122 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.786694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.787049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.787080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.787358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.787570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.787599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 
00:32:12.123 [2024-06-11 15:17:30.787887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.788065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.788101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.788498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.788730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.788760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.788993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.789363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.789393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.789568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.789799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.789828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.790142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.790423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.790452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.790823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.791202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.791232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.791626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.791962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.791991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 
00:32:12.123 [2024-06-11 15:17:30.792278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.792635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.792664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.793037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.793313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.793343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.793633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.793904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.793932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.794213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.794485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.794519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.794864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.795237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.795267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.795610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.795888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.795917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.796091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.796359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.796389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 
00:32:12.123 [2024-06-11 15:17:30.796765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.797106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.797137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.797456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.797740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.797770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.797985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.798327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.798357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.798664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.798878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.798907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.799214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.799435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.799464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.799838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.800165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.800195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.800488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.800753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.800788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 
00:32:12.123 [2024-06-11 15:17:30.801064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.801279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.801308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.801590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.801953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.801983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.802335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.802550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.802580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.123 [2024-06-11 15:17:30.802889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.803255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.123 [2024-06-11 15:17:30.803286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.123 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.803516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.803860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.803890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.804241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.804513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.804541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.804834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.805198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.805228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 
00:32:12.124 [2024-06-11 15:17:30.805439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.805738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.805767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.806058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.806336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.806366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.806680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.807046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.807081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.807376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.807710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.807740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.808081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.808235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.808265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.808624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.808912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.808941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.809230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.809560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.809589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 
00:32:12.124 [2024-06-11 15:17:30.809888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.810164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.810194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.810486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.810801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.810830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.811136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.811408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.811438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.811722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.812018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.812056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.812282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.812553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.812582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.812926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.813153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.813183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.813400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.813739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.813768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 
00:32:12.124 [2024-06-11 15:17:30.814045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.814389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.814419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.814708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.815047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.815079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.815372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.815738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.815768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.816085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.816448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.816477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.816819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.817053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.817083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.817361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.817646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.817675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.818017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.818396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.818425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 
00:32:12.124 [2024-06-11 15:17:30.818847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.819187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.819218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.124 qpair failed and we were unable to recover it. 00:32:12.124 [2024-06-11 15:17:30.819559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.819838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.124 [2024-06-11 15:17:30.819867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.820217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.820513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.820542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.820818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.821086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.821116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.821403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.821667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.821696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.821977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.822211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.822241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.822458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.822795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.822823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 
00:32:12.125 [2024-06-11 15:17:30.823122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.823351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.823381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.823743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.824017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.824054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.824395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.824679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.824707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.825063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.825401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.825430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.825718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.825925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.825954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.826246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.826470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.826500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.826869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.827138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.827168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 
00:32:12.125 [2024-06-11 15:17:30.827336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.827630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.827658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.828000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.828293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.828322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.828662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.828885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.828914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.829205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.829541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.829570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.829912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.830231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.830262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.830536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.830811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.830840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.831128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.831272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.831301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 
00:32:12.125 [2024-06-11 15:17:30.831516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.831794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.831823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.832114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.832474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.832503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.832873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.833250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.125 [2024-06-11 15:17:30.833280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.125 qpair failed and we were unable to recover it. 00:32:12.125 [2024-06-11 15:17:30.833483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.833754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.833783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.834012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.834292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.834321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.834579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.834849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.834878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.835218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.835557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.835586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 
00:32:12.126 [2024-06-11 15:17:30.835874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.836234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.836285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.836605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.836955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.836985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.837361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.837568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.837596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.837884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.838165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.838195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.838543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.838923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.838952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.839313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.839629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.839658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.839905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.840129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.840159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 
00:32:12.126 [2024-06-11 15:17:30.840378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.840658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.840688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.841054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.841335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.841364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.841733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.842019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.842057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.842307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.842641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.842670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.843010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.843293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.843322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.843597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.843988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.844018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.844303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.844613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.844643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 
00:32:12.126 [2024-06-11 15:17:30.844935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.845213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.845244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.845585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.845866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.845896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.846173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.846452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.846481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.846829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.847192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.847222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.847508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.847717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.847746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.847974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.848323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.848353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.848698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.848977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.849006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 
00:32:12.126 [2024-06-11 15:17:30.849396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.849686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.849716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.850037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.850347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.126 [2024-06-11 15:17:30.850376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.126 qpair failed and we were unable to recover it. 00:32:12.126 [2024-06-11 15:17:30.850656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.850991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.851019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.851376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.851599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.851628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.851969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.852303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.852334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.852676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.852982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.853012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.853303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.853594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.853624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 
00:32:12.127 [2024-06-11 15:17:30.853854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.854131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.854162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.854523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.854927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.854956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.855234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.855594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.855623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.855844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.856106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.856136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.856362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.856644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.856673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.856904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.857269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.857300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.857627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.857992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.858021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 
00:32:12.127 [2024-06-11 15:17:30.858303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.858586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.858615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.858904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.859184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.859214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.859505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.859804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.859833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.860117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.860426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.860456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.860675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.860895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.860925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.861217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.861488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.861517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.861795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.862072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.862103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 
00:32:12.127 [2024-06-11 15:17:30.862464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.862746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.862775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.862991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.863312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.863343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.863735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.863961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.863991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.864362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.864661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.864690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.865058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.865345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.865375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.865603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.865895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.865924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 00:32:12.127 [2024-06-11 15:17:30.866149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.866436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.127 [2024-06-11 15:17:30.866466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.127 qpair failed and we were unable to recover it. 
00:32:12.127 [2024-06-11 15:17:30.866782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.127 [2024-06-11 15:17:30.867058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.128 [2024-06-11 15:17:30.867088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420
00:32:12.128 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." sequence repeats verbatim for tqpair=0x7fc0a0000b90, addr=10.0.0.2, port=4420 ...]
00:32:12.402 [2024-06-11 15:17:30.962193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.402 [2024-06-11 15:17:30.962531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.402 [2024-06-11 15:17:30.962560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420
00:32:12.402 qpair failed and we were unable to recover it.
00:32:12.402 [2024-06-11 15:17:30.962896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.963177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.963208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.963576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.963911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.963939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.964226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.964568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.964597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.964874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.965111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.965141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.965460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.965676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.965706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.965939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.966294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.966324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.966693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.966907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.966937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 
00:32:12.402 [2024-06-11 15:17:30.967217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.967487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.967515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.967802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.968083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.968113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.968405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.968702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.968731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.968955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.969187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.969217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.969434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.969768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.969797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.969958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.970222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.970253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.970542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.970754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.970782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 
00:32:12.402 [2024-06-11 15:17:30.971127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.971357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.971386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.971772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.972138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.972169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.972512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.972789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.972818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.973045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.973336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.973365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.973591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.973804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.973832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.974171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.974515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.974544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.974777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.975080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.975110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 
00:32:12.402 [2024-06-11 15:17:30.975406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.975725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.975754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.976100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.976502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.976531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.976757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.977037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.977067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.977347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.977548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.977577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.402 qpair failed and we were unable to recover it. 00:32:12.402 [2024-06-11 15:17:30.977798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.402 [2024-06-11 15:17:30.978072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.978103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.978417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.978657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.978686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.978959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.979324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.979354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 
00:32:12.403 [2024-06-11 15:17:30.979712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.979981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.980011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.980295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.980632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.980662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.980871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.981163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.981193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.981489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.981757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.981787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.982007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.982341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.982377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.982780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.983116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.983146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.983376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.983682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.983711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 
00:32:12.403 [2024-06-11 15:17:30.983999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.984282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.984313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.984534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.984896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.984925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.985290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.985653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.985682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.985838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.986117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.986147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.986473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.986751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.986780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.987119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.987326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.987355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.987643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.987991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.988020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 
00:32:12.403 [2024-06-11 15:17:30.988312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.988585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.988620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.988924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.989135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.989165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.989441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.989724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.989753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.989972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.990196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.990227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.990570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.990862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.990892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.991079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.991360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.991389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.991668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.992011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.992048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 
00:32:12.403 [2024-06-11 15:17:30.992406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.992742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.992771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.993073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.993383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.993412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.993777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.994112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.994142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.994427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.994789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.994825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.995117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.995397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.403 [2024-06-11 15:17:30.995427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.403 qpair failed and we were unable to recover it. 00:32:12.403 [2024-06-11 15:17:30.995697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.995891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.995920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:30.996259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.996619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.996648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 
00:32:12.404 [2024-06-11 15:17:30.996881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.997177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.997207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:30.997495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.997784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.997812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:30.998155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.998451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.998479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:30.998794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.999072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.999102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:30.999443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.999799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:30.999827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.000136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.000437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.000465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.000789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.001146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.001182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 
00:32:12.404 [2024-06-11 15:17:31.001452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.001734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.001763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.002048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.002320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.002350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.002628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.003013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.003060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.003361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.003643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.003673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.003971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.004188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.004218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.004495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.004714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.004743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.005140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.005505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.005534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 
00:32:12.404 [2024-06-11 15:17:31.005822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.006091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.006121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.006424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.006726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.006754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.006974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.007375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.007405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.007696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.007965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.007994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.008336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.008620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.008649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.008918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.009307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.009337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.009567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.009784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.009813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 
00:32:12.404 [2024-06-11 15:17:31.010180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.010404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.010432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.010725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.011081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.011110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.011390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.011753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.011783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.012002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.012227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.012259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.012603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.012892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.012922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.404 qpair failed and we were unable to recover it. 00:32:12.404 [2024-06-11 15:17:31.013246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.013521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.404 [2024-06-11 15:17:31.013550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.013782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.014119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.014149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 
00:32:12.405 [2024-06-11 15:17:31.014444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.014712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.014741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.015113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.015398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.015427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.015790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.016054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.016084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.016453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.016766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.016795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.017085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.017356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.017385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.017671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.017957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.017986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.018335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.018630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.018659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 
00:32:12.405 [2024-06-11 15:17:31.018960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.019243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.019272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.019577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.019966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.019995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.020306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.020678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.020707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.020944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.021238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.021268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.021585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.021946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.021976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.022291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.022562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.022592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.022946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.023309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.023339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 
00:32:12.405 [2024-06-11 15:17:31.023579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.023866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.023895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.024133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.024495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.024524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.024732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.025080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.025110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.025478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.025765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.025794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.026016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.026332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.026361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.405 [2024-06-11 15:17:31.026650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.027043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.405 [2024-06-11 15:17:31.027074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.405 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.027442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.027727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.027757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 
00:32:12.406 [2024-06-11 15:17:31.028058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.028355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.028386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.028672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.028876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.028905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.029179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.029446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.029475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.029758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.030039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.030070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.030346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.030542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.030571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.030783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.031070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.031099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.031467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.031767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.031797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 
00:32:12.406 [2024-06-11 15:17:31.032033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.032349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.032378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.032743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.033044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.033075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.033296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.033689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.033720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.034068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.034433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.034461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.034688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.034999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.035035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.035250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.035590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.035620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.035941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.036302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.036332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 
00:32:12.406 [2024-06-11 15:17:31.036734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.037038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.037068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.037436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.037718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.037748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.038090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.038431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.038461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.038776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.038985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.039015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.039315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.039594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.039624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.039844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.040201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.040231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.040601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.040870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.040899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 
00:32:12.406 [2024-06-11 15:17:31.041189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.041461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.041490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.041862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.042160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.042190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.042411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.042636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.042666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.043037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.043331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.043360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.043714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.043942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.043971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.044312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.044701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.044730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.406 qpair failed and we were unable to recover it. 00:32:12.406 [2024-06-11 15:17:31.045103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.406 [2024-06-11 15:17:31.045326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.045355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 
00:32:12.407 [2024-06-11 15:17:31.045731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.046014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.046065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.046302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.046671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.046699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.047045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.047315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.047345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.047686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.047962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.047990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.048293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.048607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.048636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.048952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.049307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.049337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.049679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.049987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.050016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 
00:32:12.407 [2024-06-11 15:17:31.050318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.050638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.050667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.051035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.051255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.051285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.051654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.052018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.052068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.052381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.052600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.052629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.052948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.053229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.053259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.053473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.053780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.053809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.054158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.054444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.054473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 
00:32:12.407 [2024-06-11 15:17:31.054762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.055066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.055096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.055379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.055659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.055688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.055980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.056338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.056368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.056732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.057115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.057146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.057378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.057733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.057762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.058060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.058361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.058390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.058690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.058961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.058991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 
00:32:12.407 [2024-06-11 15:17:31.059297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.059638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.059666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.059956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.060313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.060343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.060571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.060850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.060879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.061107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.061392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.061421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.061784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.062174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.062205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.062510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.062796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.062825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 00:32:12.407 [2024-06-11 15:17:31.063044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.063268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.063298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.407 qpair failed and we were unable to recover it. 
00:32:12.407 [2024-06-11 15:17:31.063570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.407 [2024-06-11 15:17:31.063843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.063873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.064153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.064432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.064461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.064796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.065091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.065122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.065419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.065751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.065781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.066073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.066439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.066468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.066835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.067174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.067204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.067431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.067659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.067688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 
00:32:12.408 [2024-06-11 15:17:31.067897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.068167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.068196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.068481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.068756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.068785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.069158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.069498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.069527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.069895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.070180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.070209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.070582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.070943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.070972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.071203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.071494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.071524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.071801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.072037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.072067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 
00:32:12.408 [2024-06-11 15:17:31.072279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.072496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.072524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.072753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.072967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.072996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.073372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.073585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.073614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.073980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.074355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.074385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.074755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.075113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.075143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.075434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.075706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.075735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.076008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.076294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.076324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 
00:32:12.408 [2024-06-11 15:17:31.076638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.076975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.077004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.077294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.077568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.077598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.077898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.078208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.078238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.078608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.078877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.078906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.079258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.079624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.079653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.079895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.080259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.080289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.080575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.080794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.080823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 
00:32:12.408 [2024-06-11 15:17:31.081105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.081324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.081353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.408 qpair failed and we were unable to recover it. 00:32:12.408 [2024-06-11 15:17:31.081705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.408 [2024-06-11 15:17:31.081937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.081966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.082309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.082676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.082705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.082989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.083356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.083387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.083689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.083912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.083946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.084221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.084604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.084633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.084855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.085222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.085251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 
00:32:12.409 [2024-06-11 15:17:31.085564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.085862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.085891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.086195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.086414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.086444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.086720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.086984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.087014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.087248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.087531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.087560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.087842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.088178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.088207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.088572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.088805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.088835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.089152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.089436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.089465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 
00:32:12.409 [2024-06-11 15:17:31.089802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.090103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.090139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.090445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.090726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.090755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.090975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.091326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.091356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.091722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.092040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.092069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.092302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.092583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.092613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.092981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.093220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.093251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.093522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.093729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.093759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 
00:32:12.409 [2024-06-11 15:17:31.094046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.094417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.094446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.094754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.095116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.095146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.095489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.095765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.095794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.096042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.096414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.096447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.096814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.097175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.097206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.097545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.097833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.097862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.098163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.098439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.098468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 
00:32:12.409 [2024-06-11 15:17:31.098835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.099195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.099225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.409 [2024-06-11 15:17:31.099498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.099831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.409 [2024-06-11 15:17:31.099860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.409 qpair failed and we were unable to recover it. 00:32:12.410 [2024-06-11 15:17:31.100151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.100359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.100387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.410 qpair failed and we were unable to recover it. 00:32:12.410 [2024-06-11 15:17:31.100665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.100808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.100837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.410 qpair failed and we were unable to recover it. 00:32:12.410 [2024-06-11 15:17:31.100993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.101359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.101388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.410 qpair failed and we were unable to recover it. 00:32:12.410 [2024-06-11 15:17:31.101685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.102007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.102044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.410 qpair failed and we were unable to recover it. 00:32:12.410 [2024-06-11 15:17:31.102278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.102619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.102653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.410 qpair failed and we were unable to recover it. 
00:32:12.410 [2024-06-11 15:17:31.103022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.103317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.410 [2024-06-11 15:17:31.103346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.410 qpair failed and we were unable to recover it.
[... the same four-message sequence (two "posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111", one "nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every successive reconnect attempt, with timestamps advancing from 15:17:31.103022 through 15:17:31.201783 ...]
00:32:12.417 [2024-06-11 15:17:31.202070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.202355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.202384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.202697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.202968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.202997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.203334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.203613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.203642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.203928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.204230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.204260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.204609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.204884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.204912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.205233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.205518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.205547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.205950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.206173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.206204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 
00:32:12.417 [2024-06-11 15:17:31.206587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.206862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.206892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.207116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.207456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.207485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.207766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.208039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.208070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.208451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.208797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.208826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.209183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.209471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.209500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.209855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.210189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.210219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.210442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.210710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.210738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 
00:32:12.417 [2024-06-11 15:17:31.211017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.211362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.211392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.211698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.212054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.212084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.212310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.212668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.212698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.212988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.213413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.213443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.213848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.214204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.214235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.214516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.214746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.214775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.215139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.215430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.215459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 
00:32:12.417 [2024-06-11 15:17:31.215755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.216032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.216063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.216415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.216753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.216782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.217122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.217418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.217447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.217735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.218016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.218055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.417 qpair failed and we were unable to recover it. 00:32:12.417 [2024-06-11 15:17:31.218384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.417 [2024-06-11 15:17:31.218601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.218630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.218839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.219111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.219141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.219359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.219571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.219600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 
00:32:12.418 [2024-06-11 15:17:31.219842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.220069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.220098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.220394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.220729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.220759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.221124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.221348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.221377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.221590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.221861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.221889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.222229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.222583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.222612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.222926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.223242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.223273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.223562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.223861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.223889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 
00:32:12.418 [2024-06-11 15:17:31.224262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.224483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.224513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.224799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.225088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.225119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.225408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.225749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.225778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.226171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.226488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.226517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.226801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.227070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.227101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.227393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.227614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.227643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.227917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.228197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.228226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 
00:32:12.418 [2024-06-11 15:17:31.228518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.228805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.228834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.229109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.229332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.229360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.229639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.229978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.230006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.230389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.230673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.230703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.231047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.231375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.231405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.231690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.231981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.232011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.232253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.232518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.232547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 
00:32:12.418 [2024-06-11 15:17:31.232914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.233279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.233308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.418 [2024-06-11 15:17:31.233535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.233867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.418 [2024-06-11 15:17:31.233896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.418 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.234127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.234403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.234432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.234797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.235036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.235067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.235352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.235675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.235704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.235991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.236364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.236394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.236696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.236969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.236998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 
00:32:12.689 [2024-06-11 15:17:31.237350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.237618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.237646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.237942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.238324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.238355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.238699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.239065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.239095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.239482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.239784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.239813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.240096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.240381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.240410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.240691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.241033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.241063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.241373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.241643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.241672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 
00:32:12.689 [2024-06-11 15:17:31.241990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.242268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.242298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.242594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.242977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.243006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.243291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.243628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.243657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.243969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.244306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.244337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.244635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.244849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.244877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.245183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.245518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.245547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.245766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.246062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.246093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 
00:32:12.689 [2024-06-11 15:17:31.246393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.246683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.246712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.247071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.247342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.247371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.247642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.247920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.247949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.248250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.248639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.248668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.248958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.249227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.249257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.249540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.249768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.249797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.689 qpair failed and we were unable to recover it. 00:32:12.689 [2024-06-11 15:17:31.250018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.250332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.689 [2024-06-11 15:17:31.250362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 
00:32:12.690 [2024-06-11 15:17:31.250702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.251048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.251078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.251350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.251579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.251608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.251900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.252234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.252281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.252559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.252897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.252925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.253211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.253547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.253575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.253807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.254103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.254134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.254357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.254636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.254665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 
00:32:12.690 [2024-06-11 15:17:31.254944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.255282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.255313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.255597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.255801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.255830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.256131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.256401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.256430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.256716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.257071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.257100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.257383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.257695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.257725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.257873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.258113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.258144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.258423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.258707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.258737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 
00:32:12.690 [2024-06-11 15:17:31.259046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.259272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.259301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.259642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.259917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.259946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.260311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.260610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.260640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.260992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.261283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.261313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.261664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.262004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.262056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.262269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.262606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.262636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 00:32:12.690 [2024-06-11 15:17:31.262926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.263212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.690 [2024-06-11 15:17:31.263242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.690 qpair failed and we were unable to recover it. 
00:32:12.690 [2024-06-11 15:17:31.263560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.690 [2024-06-11 15:17:31.263893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.690 [2024-06-11 15:17:31.263923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420
00:32:12.690 qpair failed and we were unable to recover it.
00:32:12.690 [... the same error group repeats for every subsequent connection attempt from 15:17:31.264200 through 15:17:31.360618: two posix_sock_create connect() failures with errno = 111, then nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."; none of the attempts recovered ...]
00:32:12.696 [2024-06-11 15:17:31.360889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.361228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.361258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.361601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.361951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.361980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.362275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.362638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.362667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.362915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.363136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.363165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.363522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.363807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.363837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.364200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.364569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.364598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.364979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.365243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.365274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 
00:32:12.696 [2024-06-11 15:17:31.365633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.365931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.365961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.366250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.366558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.366587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.366878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.367215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.367244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.367480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.367841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.367870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.368095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.368401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.368431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.368771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.369118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.369148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.369440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.369672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.369701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 
00:32:12.696 [2024-06-11 15:17:31.369984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.370281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.370311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.370583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.370785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.370813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.371096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.371408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.371437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.371711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.372068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.372097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.372386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.372663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.372692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.696 qpair failed and we were unable to recover it. 00:32:12.696 [2024-06-11 15:17:31.372983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.373345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.696 [2024-06-11 15:17:31.373376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.373773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.374042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.374072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 
00:32:12.697 [2024-06-11 15:17:31.374439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.374778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.374807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.374959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.375267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.375297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.375575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.375845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.375874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.376215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.376606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.376635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.377002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.377379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.377409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.377774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.378089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.378121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.378466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.378760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.378789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 
00:32:12.697 [2024-06-11 15:17:31.379074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.379358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.379387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.379603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.379968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.379998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.380227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.380497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.380526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.380815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.381153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.381184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.381477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.381758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.381787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.382154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.382426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.382455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.382747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.382968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.382996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 
00:32:12.697 [2024-06-11 15:17:31.383295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.383470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.383498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.383840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.384129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.384164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.384510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.384794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.384824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.385111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.385393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.385422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.385713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.385996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.386033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.386307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.386674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.386703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.387006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.387349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.387378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 
00:32:12.697 [2024-06-11 15:17:31.387747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.388035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.388066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.388355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.388651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.388680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.388982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.389338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.389369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.389607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.389941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.389970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.390258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.390545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.390580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.390976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.391289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.391319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.697 qpair failed and we were unable to recover it. 00:32:12.697 [2024-06-11 15:17:31.391657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.391952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.697 [2024-06-11 15:17:31.391980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.698 qpair failed and we were unable to recover it. 
00:32:12.698 [2024-06-11 15:17:31.392272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.392541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.392570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.698 qpair failed and we were unable to recover it. 00:32:12.698 [2024-06-11 15:17:31.392864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.393129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.393159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.698 qpair failed and we were unable to recover it. 00:32:12.698 [2024-06-11 15:17:31.393533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.393873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.393912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.698 qpair failed and we were unable to recover it. 00:32:12.698 [2024-06-11 15:17:31.394316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.394603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.394650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.698 qpair failed and we were unable to recover it. 00:32:12.698 [2024-06-11 15:17:31.395043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.395470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.698 [2024-06-11 15:17:31.395501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.395865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.396178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.396209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.396632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.396917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.396947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 
00:32:12.699 [2024-06-11 15:17:31.397265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.397542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.397582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.397876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.398168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.398201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.398602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.398891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.398921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.399164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.399467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.399497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.399866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.400089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.400120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.400437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.400747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.400779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.401056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.401341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.401370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 
00:32:12.699 [2024-06-11 15:17:31.401740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.401957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.401987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.402212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.402437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.402467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.402671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.402946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.402975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.403289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.403668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.403703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.404048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.404277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.404306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.404580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.404862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.404891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.405181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.405401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.405430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 
00:32:12.699 [2024-06-11 15:17:31.405715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.405986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.406015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.406245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.406528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.406558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.699 [2024-06-11 15:17:31.406840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.407120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.699 [2024-06-11 15:17:31.407151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.699 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.407495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.407781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.407811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.408092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.408306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.408336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.408608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.408880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.408909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.409199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.409471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.409499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 
00:32:12.700 [2024-06-11 15:17:31.409848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.410219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.410250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.410567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.410848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.410877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.411106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.411391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.411420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.411726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.412002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.412042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.412332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.412541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.412571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.412899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.413244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.413275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.413660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.413992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.414022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 
00:32:12.700 [2024-06-11 15:17:31.414401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.414690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.414719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.415089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.415302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.415331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.415562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.415897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.415926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.416300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.416585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.416614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.416894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.417168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.417199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.417490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.417805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.417835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.418129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.418425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.418454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 
00:32:12.700 [2024-06-11 15:17:31.418740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.418956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.418985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.419287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.419646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.419676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.419962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.420148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.420178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.420465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.422567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.422623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.423051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.423278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.423309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.423600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.423875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.423904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 00:32:12.700 [2024-06-11 15:17:31.424140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.424352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.700 [2024-06-11 15:17:31.424382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420 00:32:12.700 qpair failed and we were unable to recover it. 
00:32:12.700 [2024-06-11 15:17:31.424766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.700 [2024-06-11 15:17:31.425092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.700 [2024-06-11 15:17:31.425123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420
00:32:12.700 qpair failed and we were unable to recover it.
[... the same sequence (two "posix_sock_create: *ERROR*: connect() failed, errno = 111" entries, one "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a0000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from [2024-06-11 15:17:31.425511] through [2024-06-11 15:17:31.456644], console time 00:32:12.700 to 00:32:12.702 ...]
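For reference on the failure mode itself: errno = 111 on Linux is ECONNREFUSED, meaning the TCP connection attempt to 10.0.0.2:4420 (the standard NVMe/TCP port) is being rejected because nothing is accepting connections there at that moment. The minimal sketch below is not part of the test; it only assumes a reachable host with no listener on the chosen port, and reproduces the same errno that posix_sock_create() logs above.

/* connect_refused.c - minimal sketch: connect() to a reachable host with no
 * listener fails with errno 111 (ECONNREFUSED), the same errno reported by
 * posix_sock_create() in the log above. The target address/port below match
 * the log but are illustrative; adjust as needed. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port used in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}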
00:32:12.702 [2024-06-11 15:17:31.456984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.702 [2024-06-11 15:17:31.457283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.702 [2024-06-11 15:17:31.457324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420
00:32:12.702 qpair failed and we were unable to recover it.
[... the same failure sequence, now against tqpair=0x1ff4b60 (same addr=10.0.0.2, port=4420), repeats for every reconnect attempt from [2024-06-11 15:17:31.457620] through [2024-06-11 15:17:31.522906], console time 00:32:12.702 to 00:32:12.976 ...]
00:32:12.976 [2024-06-11 15:17:31.523255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.523529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.523558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.523797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.524133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.524163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.524447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.524713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.524741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.525061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.525347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.525376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.525671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.525887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.525915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.526194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.526504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.526534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.526866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.527228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.527259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 
00:32:12.976 [2024-06-11 15:17:31.527474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.527852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.527883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.528161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.528505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.528534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.528909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.529197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.529227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.529530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.529810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.529839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.530185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.530575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.530604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.530964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.531331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.531361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.531707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.531930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.531960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 
00:32:12.976 [2024-06-11 15:17:31.532189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.532497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.532526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.532849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.533120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.533151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.533428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.533789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.533818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.534097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.534376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.534410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.534700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.535007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.535047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.535366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.535701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.535730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.536073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.536354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.536384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 
00:32:12.976 [2024-06-11 15:17:31.536661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.536972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.537002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.537356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.537622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.537651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.537956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.538237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.538267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.538610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.538880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.538910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.539252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.539630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.539659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.539961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.540237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.976 [2024-06-11 15:17:31.540268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.976 qpair failed and we were unable to recover it. 00:32:12.976 [2024-06-11 15:17:31.540578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.540855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.540890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 
00:32:12.977 [2024-06-11 15:17:31.541167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.541463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.541493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.541878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.542239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.542269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.542556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.542853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.542882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.543118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.543403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.543432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.543780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.544078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.544109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.544335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.544700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.544729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.545073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.545306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.545335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 
00:32:12.977 [2024-06-11 15:17:31.545646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.545920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.545948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.546293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.546653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.546682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.546975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.547322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.547351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.547583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.547857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.547886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.548200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.548563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.548593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.548981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.549330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.549360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.549748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.550015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.550053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 
00:32:12.977 [2024-06-11 15:17:31.550282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.550587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.550616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.550984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.551327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.551357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff4b60 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.551723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.551922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.551934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.552241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.552565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.552576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.552749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.552993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.553002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.553254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.553497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.553507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.553834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.554079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.554089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 
00:32:12.977 [2024-06-11 15:17:31.554397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.554661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.554671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.554944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.555314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.555324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.555574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.555879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.555889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.556187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.556538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.556548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.556874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.557117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.557127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.557310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.557577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.557587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 00:32:12.977 [2024-06-11 15:17:31.557832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.558105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.977 [2024-06-11 15:17:31.558115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.977 qpair failed and we were unable to recover it. 
00:32:12.977 [2024-06-11 15:17:31.558361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.558666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.558676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.558858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.559182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.559192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.559524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.559817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.559827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.560010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.560245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.560255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.560419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.560611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.560621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.560920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.561220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.561230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.561470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.561767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.561777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 
00:32:12.978 [2024-06-11 15:17:31.562083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.562308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.562318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.562507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.562691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.562701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.562941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.563188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.563198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.563374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.563697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.563707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.563951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.564285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.564296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.564597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.564776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.564786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.565112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.565435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.565445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 
00:32:12.978 [2024-06-11 15:17:31.565757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.565930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.565940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.566241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.566566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.566575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.566808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.567100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.567110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.567357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.567621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.567631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.567813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.568039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.568049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.568220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.568563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.568573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.568812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.569049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.569059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 
00:32:12.978 [2024-06-11 15:17:31.569236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.569497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.569507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.569862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.570191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.570202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.570431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.570661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.570671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.570992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.571260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.571270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.571501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.571727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.571736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.571969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.572226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.572236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 00:32:12.978 [2024-06-11 15:17:31.572587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.572831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.572840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.978 qpair failed and we were unable to recover it. 
00:32:12.978 [2024-06-11 15:17:31.573191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.573518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.978 [2024-06-11 15:17:31.573528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.573759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.574070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.574080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.574345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.574666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.574676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.574933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.575126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.575137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.575372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.575613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.575623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.575888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.576125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.576135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.576466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.576701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.576711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 
00:32:12.979 [2024-06-11 15:17:31.577034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.577273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.577283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.577471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.577713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.577723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.577967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.578264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.578274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.578465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.578815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.578826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.579125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.579366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.579376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.579608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.579924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.579934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.580122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.580442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.580452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 
00:32:12.979 [2024-06-11 15:17:31.580778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.581112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.581122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.581362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.581608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.581618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.581777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.582076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.582086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.582325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.582569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.582579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.582833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.583107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.583117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.583356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.583562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.583573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.583822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.584119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.584129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 
00:32:12.979 [2024-06-11 15:17:31.584314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.584503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.584513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.584836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.585065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.585075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.585242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.585410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.585420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.585662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.585914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.585924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.586092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.586321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.586330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.586631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.586966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.586976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 00:32:12.979 [2024-06-11 15:17:31.587134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.587376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.587385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.979 qpair failed and we were unable to recover it. 
00:32:12.979 [2024-06-11 15:17:31.587615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.979 [2024-06-11 15:17:31.587856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.587866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.588096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.588350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.588361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.588620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.588868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.588878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.589204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.589445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.589455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.589628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.589804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.589815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.590135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.590431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.590440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.590679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.590857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.590866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 
00:32:12.980 [2024-06-11 15:17:31.591061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.591232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.591243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.591557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.591887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.591896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.592131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.592453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.592463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.592730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.593009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.593019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.593214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.593450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.593459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.593811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.594041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.594051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.594350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.594516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.594525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 
00:32:12.980 [2024-06-11 15:17:31.594828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.595183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.595193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.595434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.595665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.595675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.595998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.596240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.596253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.596439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.596762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.596772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.597000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.597331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.597342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.597522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.597784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.597794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.598051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.598350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.598360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 
00:32:12.980 [2024-06-11 15:17:31.598677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.598951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.598962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.980 qpair failed and we were unable to recover it. 00:32:12.980 [2024-06-11 15:17:31.599157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.599402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.980 [2024-06-11 15:17:31.599412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.599682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.599909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.599919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.600215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.600442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.600452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.600700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.600936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.600946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.601201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.601442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.601454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.601630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.601927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.601937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 
00:32:12.981 [2024-06-11 15:17:31.602127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.602309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.602320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.602611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.602843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.602853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.603112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.603354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.603364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.603703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.603938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.603948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.604206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.604502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.604512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.604760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.605061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.605071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.605262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.605583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.605593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 
00:32:12.981 [2024-06-11 15:17:31.605890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.606057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.606067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.606308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.606492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.606504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.606740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.606998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.607007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.607264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.607497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.607508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.607776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.608083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.608094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.608413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.608738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.608747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.608923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.609190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.609200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 
00:32:12.981 [2024-06-11 15:17:31.609515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.609838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.609848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.610038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.610277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.610297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.610540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.610780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.610791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.611138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.611275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.611285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.611446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.611690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.611701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.611924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.612252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.612262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.612513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.612838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.612848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 
00:32:12.981 [2024-06-11 15:17:31.613096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.613323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.613334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.613602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.613872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.613882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.981 qpair failed and we were unable to recover it. 00:32:12.981 [2024-06-11 15:17:31.614153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.981 [2024-06-11 15:17:31.614402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.614412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.614600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.614843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.614853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.615152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.615387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.615398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.615635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.615861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.615870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.616056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.616351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.616361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 
00:32:12.982 [2024-06-11 15:17:31.616608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.616780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.616789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.617056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.617304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.617314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.617567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.617798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.617808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.618057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.618384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.618394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.618570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.618806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.618815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.619116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.619340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.619350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.619603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.619924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.619933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 
00:32:12.982 [2024-06-11 15:17:31.620233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.620467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.620476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.620655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.620822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.620832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.620995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.621163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.621173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.621406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.621750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.621759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.622007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.622331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.622341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.622584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.622906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.622916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.623153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.623394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.623404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 
00:32:12.982 [2024-06-11 15:17:31.623634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.623879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.623888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.624239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.624410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.624420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.624653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.624828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.624837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.625135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.625314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.625324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.625577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.625815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.625825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.626010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.626195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.626205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 00:32:12.982 [2024-06-11 15:17:31.626346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.626600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.626610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.982 qpair failed and we were unable to recover it. 
00:32:12.982 [2024-06-11 15:17:31.626927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.982 [2024-06-11 15:17:31.627253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.627263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.627503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.627744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.627754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.627996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.628317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.628327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.628625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.628810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.628820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.629147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.629380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.629390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.629688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.629921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.629931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.630173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.630338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.630348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 
00:32:12.983 [2024-06-11 15:17:31.630652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.630973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.630983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.631224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.631474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.631484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.631712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.632040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.632052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.632225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.632473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.632483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.632836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.633171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.633181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.633479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.633778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.633788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.633958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.634271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.634281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 
00:32:12.983 [2024-06-11 15:17:31.634588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.634837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.634847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.635194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.635545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.635555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.635869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.636107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.636118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.636414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.636734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.636744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.637056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.637400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.637409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.637651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.637877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.637887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.638122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.638449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.638459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 
00:32:12.983 [2024-06-11 15:17:31.638770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.639033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.639044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.639323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.639579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.639589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.639773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.639950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.639960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.640240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.640471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.640481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.640671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.640913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.640923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.641198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.641498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.641508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 00:32:12.983 [2024-06-11 15:17:31.641808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.642048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.642059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.983 qpair failed and we were unable to recover it. 
00:32:12.983 [2024-06-11 15:17:31.642291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.642466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.983 [2024-06-11 15:17:31.642476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.642775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.643004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.643015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.643346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.643536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.643547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.643811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.644057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.644068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.644373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.644610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.644619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.644866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.645108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.645119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.645417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.645659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.645669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 
00:32:12.984 [2024-06-11 15:17:31.645915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.646148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.646159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.646414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.646586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.646597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.646863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.647021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.647036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.647340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.647519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.647528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.647844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.648093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.648103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.648404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.648699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.648708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.648958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.649249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.649259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 
00:32:12.984 [2024-06-11 15:17:31.649493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.649717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.649727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.649974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.650166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.650176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.650445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.650688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.650698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.650929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.651110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.651121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.651352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.651658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.651668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.651947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.652192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.652202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.652517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.652822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.652832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 
00:32:12.984 [2024-06-11 15:17:31.652954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.653254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.653264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.653510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.653772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.653782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.654033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.654393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.654402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.654582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.654823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.654833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.655104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.655399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.655409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.655648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.655974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.655985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.656222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.656584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.656593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 
00:32:12.984 [2024-06-11 15:17:31.656915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.657237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.984 [2024-06-11 15:17:31.657247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.984 qpair failed and we were unable to recover it. 00:32:12.984 [2024-06-11 15:17:31.657478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.657706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.657716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.658017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.658339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.658349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.658528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.658760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.658770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.659047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.659366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.659376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.659711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.659949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.659959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.660201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.660526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.660537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 
00:32:12.985 [2024-06-11 15:17:31.660778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.661023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.661039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.661274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.661524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.661534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.661831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.662128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.662138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.662405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.662566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.662576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.662877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.663199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.663209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.663509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.663684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.663693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.664018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.664193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.664203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 
00:32:12.985 [2024-06-11 15:17:31.664528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.664828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.664837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.665177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.665341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.665351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.665526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.665721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.665731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.666040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.666201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.666211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.666487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.666717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.666727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.667033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.667275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.667285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.667584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.667906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.667916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 
00:32:12.985 [2024-06-11 15:17:31.668192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.668362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.668371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.668532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.668686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.668696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.668871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.669210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.669220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.985 qpair failed and we were unable to recover it. 00:32:12.985 [2024-06-11 15:17:31.669465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.669735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.985 [2024-06-11 15:17:31.669745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.670043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.670401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.670411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.670740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.671040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.671050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.671244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.671562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.671572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 
00:32:12.986 [2024-06-11 15:17:31.671747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.671994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.672004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.672245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.672519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.672529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.672772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.672933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.672943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.673262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.673525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.673535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.673856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.674156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.674167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.674439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.674775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.674785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.674960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.675313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.675325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 
00:32:12.986 [2024-06-11 15:17:31.675573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.675831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.675840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.676084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.676331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.676341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.676580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.676806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.676817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.677178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.677429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.677439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.677793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.678021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.678037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.678285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.678582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.678593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.678891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.679243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.679254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 
00:32:12.986 [2024-06-11 15:17:31.679498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.679796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.679806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.680049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.680408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.680417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.680696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.680994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.681007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.681187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.681484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.681494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.681668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.682024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.682036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.682266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.682631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.682641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.682868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.683166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.683176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 
00:32:12.986 [2024-06-11 15:17:31.683523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.683869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.683879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.684175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.684474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.684484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.684722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.684900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.684910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.685209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.685482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.685491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.986 qpair failed and we were unable to recover it. 00:32:12.986 [2024-06-11 15:17:31.685824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.986 [2024-06-11 15:17:31.686123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.686133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.686313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.686609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.686621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.686895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.687140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.687150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 [2024-06-11 15:17:31.687425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.687668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.687677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.687922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.688248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.688258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.688521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.688771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.688781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.689029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.689278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.689288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.689544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.689841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.689851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.690086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.690280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.690290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.690494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.690721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.690731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 [2024-06-11 15:17:31.691053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.691308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.691318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.691565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.691909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.691921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.692172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3484945 Killed "${NVMF_APP[@]}" "$@" 00:32:12.987 [2024-06-11 15:17:31.692346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.692356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.692674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 15:17:31 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:32:12.987 [2024-06-11 15:17:31.693000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.693011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.693294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 15:17:31 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:12.987 [2024-06-11 15:17:31.693528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.693539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 15:17:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:12.987 [2024-06-11 15:17:31.693781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 15:17:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:12.987 [2024-06-11 15:17:31.693960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.693971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 15:17:31 -- common/autotest_common.sh@10 -- # set +x 00:32:12.987 [2024-06-11 15:17:31.694199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.694536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.694547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.694794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.695037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.695047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.695291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.695554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.695563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.695862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.696044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.696054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.696408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.696747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.696759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.696991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.697295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.697305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.697602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.697901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.697911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 [2024-06-11 15:17:31.698214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.698540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.698550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.698817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.699141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.699151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.699432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.699600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.699610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 [2024-06-11 15:17:31.699783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.700014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.700023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.987 qpair failed and we were unable to recover it. 00:32:12.987 15:17:31 -- nvmf/common.sh@469 -- # nvmfpid=3485913 00:32:12.987 [2024-06-11 15:17:31.700273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 15:17:31 -- nvmf/common.sh@470 -- # waitforlisten 3485913 00:32:12.987 [2024-06-11 15:17:31.700553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.987 [2024-06-11 15:17:31.700563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 15:17:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:12.988 [2024-06-11 15:17:31.700750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 15:17:31 -- common/autotest_common.sh@819 -- # '[' -z 3485913 ']' 00:32:12.988 [2024-06-11 15:17:31.701069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.701080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 15:17:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.988 [2024-06-11 15:17:31.701187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 15:17:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:12.988 [2024-06-11 15:17:31.701428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.701439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.701621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 15:17:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.988 15:17:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:12.988 [2024-06-11 15:17:31.701973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.701983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 15:17:31 -- common/autotest_common.sh@10 -- # set +x 00:32:12.988 [2024-06-11 15:17:31.702215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.702406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.702416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.702658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.702982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.702991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.703323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.703587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.703597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.703902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.704075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.704085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 
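The trace above shows the harness restarting the target: nvmf_tgt is relaunched inside the cvl_0_0_ns_spdk network namespace as pid 3485913, and waitforlisten then blocks until the new process is up and listening on its RPC socket at /var/tmp/spdk.sock (hence the echoed 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'). The following is a rough, hypothetical approximation of that wait, for illustration only; the real helper lives in autotest_common.sh and presumably also keeps checking that the pid is still alive.

#!/usr/bin/env python3
# Rough illustration of a waitforlisten-style loop: poll until a UNIX
# domain socket accepts a connection or a deadline expires.
import socket
import sys
import time

def wait_for_unix_socket(path: str, timeout: float = 30.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        probe = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            probe.connect(path)      # succeeds once the RPC server is listening
            return True
        except OSError:
            time.sleep(0.2)          # socket missing or refusing; retry
        finally:
            probe.close()
    return False

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/var/tmp/spdk.sock"
    sys.exit(0 if wait_for_unix_socket(path) else 1)

Exit status 0 here corresponds to the point where the harness stops waiting and proceeds to configure the relaunched target.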
00:32:12.988 [2024-06-11 15:17:31.704253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.704581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.704590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.704833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.705131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.705141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.705388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.705713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.705723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.705953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.706078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.706091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.706339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.706536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.706547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.706781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.707041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.707052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.707245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.707416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.707427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-06-11 15:17:31.707606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.707787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.707798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.708053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.708356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.708368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.708603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.708836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.708845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.709098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.709290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.709299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.709541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.709700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.709710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.710059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.710384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.710394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 00:32:12.988 [2024-06-11 15:17:31.710648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.710977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.988 [2024-06-11 15:17:31.710989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-06-11 15:17:31.711288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.988 [2024-06-11 15:17:31.711543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.988 [2024-06-11 15:17:31.711553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:12.988 qpair failed and we were unable to recover it.
[... the same sequence of posix_sock_create connect() failures (errno = 111), the nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." repeats continuously from 2024-06-11 15:17:31.711751 through 15:17:31.744933 ...]
00:32:12.991 [2024-06-11 15:17:31.745200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.745527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.745536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:12.991 qpair failed and we were unable to recover it.
00:32:12.991 [2024-06-11 15:17:31.745776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.745890] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:32:12.991 [2024-06-11 15:17:31.745942] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:12.991 [2024-06-11 15:17:31.746010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.746021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:12.991 qpair failed and we were unable to recover it.
00:32:12.991 [2024-06-11 15:17:31.746353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.746655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.746664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:12.991 qpair failed and we were unable to recover it.
00:32:12.991 [2024-06-11 15:17:31.746963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.747205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.747215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:12.991 qpair failed and we were unable to recover it.
00:32:12.991 [2024-06-11 15:17:31.747511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.747811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.747821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:12.991 qpair failed and we were unable to recover it.
00:32:12.991 [2024-06-11 15:17:31.747993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.748177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.991 [2024-06-11 15:17:31.748188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:12.991 qpair failed and we were unable to recover it.
[... the same sequence of posix_sock_create connect() failures (errno = 111), nvme_tcp_qpair_connect_sock sock connection errors for tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." repeats continuously from 2024-06-11 15:17:31.748486 through 15:17:31.787510 ...]
00:32:12.994 [2024-06-11 15:17:31.787837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.994 [2024-06-11 15:17:31.788065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.788075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.788373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.788650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.788660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.789030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.789221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.789232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.789399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.789553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.789562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.789812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.790144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.790155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.790319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.790645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.790655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.790900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.791161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.791172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 
00:32:12.994 [2024-06-11 15:17:31.791382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.791486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.791496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.791762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.792002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.792012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.792194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.792478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.792488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.792790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.792983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.792994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.994 qpair failed and we were unable to recover it. 00:32:12.994 [2024-06-11 15:17:31.793236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.994 [2024-06-11 15:17:31.793491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.793501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.793736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.793976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.793986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.794159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.794405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.794415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 
00:32:12.995 [2024-06-11 15:17:31.794732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.794907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.794917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.795092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.795278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.795287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.795461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.795783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.795792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.796053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.796304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.796315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.796564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.796808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.796818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.797053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.797245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.797255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.797419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.797591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.797601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 
00:32:12.995 [2024-06-11 15:17:31.797840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.798094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.798104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.798352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.798684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.798695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.798933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.799183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.799194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.799441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.799695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.799705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.799943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.800204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.800215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.800543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.800817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.800828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.801067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.801310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.801320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 
00:32:12.995 [2024-06-11 15:17:31.801571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.801683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.801694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.802024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.802287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.802297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.802479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.802745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.802755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.802920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.803159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.803169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.803414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.803720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.803731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.804034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.804357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.804368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.804617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.804944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.804955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 
00:32:12.995 [2024-06-11 15:17:31.805209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.805443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.805455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.805678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.805943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.805955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:12.995 [2024-06-11 15:17:31.806194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.806470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.995 [2024-06-11 15:17:31.806482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:12.995 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.806792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.807143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.807155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.807486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.807683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.807694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.808001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.808316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.808328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.808587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.808775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.808786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 
00:32:13.266 [2024-06-11 15:17:31.809139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.809371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.809383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.809660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.809994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.810006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.810117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.810349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.810359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.810681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.810976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.810985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.811244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.811577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.811587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.811892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.812083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.812093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.812368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.812542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.812551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 
00:32:13.266 [2024-06-11 15:17:31.812796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.812977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.812987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.813213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.813456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.813467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.813796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.814045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.814055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.814372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.814612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.814622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.814795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.815029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.815039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.815383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.815556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.815566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.815895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.816127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.816137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 
00:32:13.266 [2024-06-11 15:17:31.816390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.816651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.816661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.816904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.817180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.266 [2024-06-11 15:17:31.817190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.266 qpair failed and we were unable to recover it. 00:32:13.266 [2024-06-11 15:17:31.817544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.817842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.817851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.818043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.818313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.818322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.818565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.818670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.818679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.818911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.819174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.819183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.819421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.819611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.819619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 
00:32:13.267 [2024-06-11 15:17:31.819783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.820020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.820032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.820226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.820412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.820421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.820603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.820859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.820868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.821167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.821462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.821471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.821774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.822014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.822023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.822210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.822383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.822392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.822586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.822874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.822883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 
00:32:13.267 [2024-06-11 15:17:31.823063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.823235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.823245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.823558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.823819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.823828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.824096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.824361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.824370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.824562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.824728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.824737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.824980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.825181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.825190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.825355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.825594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.825603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.825786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.825952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.825962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 
00:32:13.267 [2024-06-11 15:17:31.826166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.826335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.826344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.826521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.826764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.826773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.826956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.827185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.827194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.827422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.827681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.827690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.827938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.828174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.828183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.828433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.828614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.828623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.828796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.829094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.829103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 
00:32:13.267 [2024-06-11 15:17:31.829227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.829397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.829406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.829564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.829740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.829749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.267 qpair failed and we were unable to recover it. 00:32:13.267 [2024-06-11 15:17:31.829920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.267 [2024-06-11 15:17:31.830217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.830226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.830501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.830690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.830699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.830946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.831069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.831078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.831241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.831399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.831408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.831648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.831880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.831889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 
00:32:13.268 [2024-06-11 15:17:31.832140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.832328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.832338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.832500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.832664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.832673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.832850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.833020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.833036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.833215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.833454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.833464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.833641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.833816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.833825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.834071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.834302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.834312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.834625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.834861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.834870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 
00:32:13.268 [2024-06-11 15:17:31.835034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.835280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.835288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.835477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.835653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.835663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.835850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.835963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.835972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.836269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.836436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.836445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.836616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.836855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.836865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.837066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.837228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.837237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.837467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.837630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.837639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 
00:32:13.268 [2024-06-11 15:17:31.837870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.838048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.838057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.838358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.838596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.838605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.838771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.838936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.838944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.839209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.839456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.839465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.839702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.839865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.839874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.840194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.840284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:13.268 [2024-06-11 15:17:31.840375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.840384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.840556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.840792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.840802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 
00:32:13.268 [2024-06-11 15:17:31.841041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.841217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.841226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.841550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.841780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.841791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.268 [2024-06-11 15:17:31.842031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.842261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.268 [2024-06-11 15:17:31.842271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.268 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.842503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.842734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.842744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.842916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.843097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.843107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.843295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.843564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.843573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.843752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.843923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.843932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 
00:32:13.269 [2024-06-11 15:17:31.844108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.844302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.844312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.844557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.844812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.844822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.845070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.845234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.845244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.845509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.845691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.845700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.845869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.845996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.846006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.846180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.846355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.846365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.846592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.846907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.846918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 
00:32:13.269 [2024-06-11 15:17:31.847147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.847491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.847501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.847687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.847986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.847996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.848159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.848436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.848446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.848603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.848952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.848962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.849212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.849455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.849464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.849695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.849935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.849946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.850128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.850303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.850313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 
00:32:13.269 [2024-06-11 15:17:31.850486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.850640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.850650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.850831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.851004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.851014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.851187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.851487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.851497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.851671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.851903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.851912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.852088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.852331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.852341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.852505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.852743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.852753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.853022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.853215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.853224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 
00:32:13.269 [2024-06-11 15:17:31.853323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.853482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.853492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.853657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.853883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.853893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.269 [2024-06-11 15:17:31.854123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.854282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.269 [2024-06-11 15:17:31.854291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.269 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.854541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.854698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.854707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.854940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.855240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.855249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.855496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.855666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.855675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.856000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.856263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.856273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 
00:32:13.270 [2024-06-11 15:17:31.856523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.856762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.856772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.857076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.857320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.857330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.857599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.857787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.857796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.857987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.858170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.858180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.858441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.858611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.858620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.858958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.859202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.859212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.859381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.859620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.859630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 
00:32:13.270 [2024-06-11 15:17:31.859816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.859999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.860008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.860357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.860462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.860471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.860654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.860893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.860903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.861098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.861339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.861350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.861545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.861739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.861748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.861980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.862213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.862223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.862503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.862677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.862686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 
00:32:13.270 [2024-06-11 15:17:31.862865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.863022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.863036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.863297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.863487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.863496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.863812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.864069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.864079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.864321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.864590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.864599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.270 qpair failed and we were unable to recover it. 00:32:13.270 [2024-06-11 15:17:31.864965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.270 [2024-06-11 15:17:31.865196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.865207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.865381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.865715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.865725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.866053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.866286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.866297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 
00:32:13.271 [2024-06-11 15:17:31.866542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.866720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.866730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.866916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.867091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.867102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.867349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.867586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.867595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.867796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.867993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.868003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.868244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.868468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.868478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.868646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.868903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.868912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.869239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.869490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.869499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 
00:32:13.271 [2024-06-11 15:17:31.869820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.870064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.870074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.870318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.870614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.870622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.870794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.870978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.870990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.871231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.871407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.871417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.871583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.871767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.871777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.872039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.872249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.872259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.872556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.872851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.872860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 
00:32:13.271 [2024-06-11 15:17:31.873096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.873331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.873341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.873581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.873774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.873783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.874772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.874963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.874976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.875213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.876008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.876035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.876309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.876553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.876563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.877387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.877656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.877671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.878483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.878736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.878748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 
00:32:13.271 [2024-06-11 15:17:31.878943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.879190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.879200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.879437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.879607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.879617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.879780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.880014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.880030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.880272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.880453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.880462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.880642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.880823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.271 [2024-06-11 15:17:31.880832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.271 qpair failed and we were unable to recover it. 00:32:13.271 [2024-06-11 15:17:31.880999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.881180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.881190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.881353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.881522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.881532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 
00:32:13.272 [2024-06-11 15:17:31.881697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.881940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.881950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.882137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.882311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.882322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.882587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.882743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.882753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.882989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.883301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.883312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.883540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.883732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.883741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.883905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.884149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.884159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.884427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.884669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.884678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 
00:32:13.272 [2024-06-11 15:17:31.884920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.885163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.885174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.885359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.885516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.885526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.885740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.886040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.886050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.886282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.886448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.886459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.886640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.886893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.886903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.887163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.887337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.887346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.887667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.887837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.887848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 
00:32:13.272 [2024-06-11 15:17:31.888042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.888276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.888285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.888581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.888752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.888762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.889044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.889224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.889234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.889467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.889642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.889653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.889811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.890039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.890049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.890226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.890412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.890422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.890640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.890814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.890825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 
00:32:13.272 [2024-06-11 15:17:31.890982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.891156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.891167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.891493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.891793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.891803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.892093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.892417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.892426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.892614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.892972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.892981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.893234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.893482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.893492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.272 qpair failed and we were unable to recover it. 00:32:13.272 [2024-06-11 15:17:31.893742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.893916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.272 [2024-06-11 15:17:31.893926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.894169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.894347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.894355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 
00:32:13.273 [2024-06-11 15:17:31.894585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.894936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.894945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.895184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.895426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.895435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.895615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.895793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.895803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.895965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.896233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.896244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.896413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.896670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.896680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.896941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.897171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.897182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.897414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.897638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.897648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 
00:32:13.273 [2024-06-11 15:17:31.897809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.898036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.898046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.898290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.898467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.898476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.898652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.898832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.898841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.899101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.899273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.899283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.899511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.899702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.899722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.899952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.900276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.900286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.900547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.900874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.900883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 
00:32:13.273 [2024-06-11 15:17:31.901068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.901180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.901189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.901529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.901710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.901720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.901996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.902310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.902320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.902496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.902676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.902685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.903041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.903338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.903347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.903581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.903809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.903820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.904021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.904285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.904295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 
00:32:13.273 [2024-06-11 15:17:31.904532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.904792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.904803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.905123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.905329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.905339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.905523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.905703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.905713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.905895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.906195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.906205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.906392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.906556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.906566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.273 qpair failed and we were unable to recover it. 00:32:13.273 [2024-06-11 15:17:31.906828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.273 [2024-06-11 15:17:31.907076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.907087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.907264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.907384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.907394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 
00:32:13.274 [2024-06-11 15:17:31.907626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.907887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.907897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.908137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.908377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.908387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.908699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.908960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.908971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.909215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.909594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.909604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.909851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.910113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.910123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.910351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.910580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.910591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.910833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.911148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.911158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 
00:32:13.274 [2024-06-11 15:17:31.911424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.911653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.911664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.911898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.912072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.912082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.912331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.912583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.912593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.912899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.913150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.913161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.913458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.913698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.913708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.913976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.914087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.914097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.914273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.914532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.914543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 
00:32:13.274 [2024-06-11 15:17:31.914807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.914995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.915005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.915254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.915581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.915591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.915845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.916102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.916112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.916461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.916698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.916708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.917007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.917289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.917309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.917606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.917923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.917933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.918231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.918484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.918494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 
00:32:13.274 [2024-06-11 15:17:31.918723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.918899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.918908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.919262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.919445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.919455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.919684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.919849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.919859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.920014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.920249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.920259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.920444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.920795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.920805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.920981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.921155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.274 [2024-06-11 15:17:31.921166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.274 qpair failed and we were unable to recover it. 00:32:13.274 [2024-06-11 15:17:31.921339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.921499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.921508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 
00:32:13.275 [2024-06-11 15:17:31.921739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.921993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.922003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 00:32:13.275 [2024-06-11 15:17:31.922301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.922541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.922550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 00:32:13.275 [2024-06-11 15:17:31.922784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.923036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.923045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 00:32:13.275 [2024-06-11 15:17:31.923288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.923451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.923460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 00:32:13.275 [2024-06-11 15:17:31.923639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.923824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.923833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 00:32:13.275 [2024-06-11 15:17:31.923940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.924053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.924062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 00:32:13.275 [2024-06-11 15:17:31.924312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.924562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.924571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 
00:32:13.275 [2024-06-11 15:17:31.924800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.925045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.925054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.925380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.925737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.925746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.926055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.926207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.926218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.926390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.926648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:13.275 [2024-06-11 15:17:31.926685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.926694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.926779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:13.275 [2024-06-11 15:17:31.926791] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:13.275 [2024-06-11 15:17:31.926801] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:13.275 [2024-06-11 15:17:31.926935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.927183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.927106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:32:13.275 [2024-06-11 15:17:31.927194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.927134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:32:13.275 [2024-06-11 15:17:31.927275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:32:13.275 [2024-06-11 15:17:31.927369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.927275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:32:13.275 [2024-06-11 15:17:31.927702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.927711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.927906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.928090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.928099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.928343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.928494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.928503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.928837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.929092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.929101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.929366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.929607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.929616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.929857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.930034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.930044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.930160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.930254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.275 [2024-06-11 15:17:31.930263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.275 qpair failed and we were unable to recover it.
00:32:13.275 [2024-06-11 15:17:31.930589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.930765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.930774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 00:32:13.275 [2024-06-11 15:17:31.931055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.931239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.931249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.275 qpair failed and we were unable to recover it. 00:32:13.275 [2024-06-11 15:17:31.931576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.275 [2024-06-11 15:17:31.931849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.931858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.932180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.932329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.932338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.932585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.932828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.932836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.933068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.933307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.933316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.933473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.933647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.933656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 
00:32:13.276 [2024-06-11 15:17:31.933835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.934017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.934029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.934223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.934546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.934556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.934827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.935005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.935014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.935311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.935477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.935486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.935652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.935889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.935898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.936084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.936253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.936263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.936491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.936788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.936799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 
00:32:13.276 [2024-06-11 15:17:31.936953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.937195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.937205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.937383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.937608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.937618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.937895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.938132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.938142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.938385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.938659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.938670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.938904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.939136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.939147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.939406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.939640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.939651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.939836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.940078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.940089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 
00:32:13.276 [2024-06-11 15:17:31.940349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.940598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.940607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.940833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.941069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.941079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.941328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.941627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.941637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.941820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.941980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.941990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.942287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.942516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.942525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.942821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.943124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.943135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.943392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.943661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.943672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 
00:32:13.276 [2024-06-11 15:17:31.943919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.944165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.944175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.944362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.944604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.944614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.944847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.945170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.276 [2024-06-11 15:17:31.945180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.276 qpair failed and we were unable to recover it. 00:32:13.276 [2024-06-11 15:17:31.945454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.945695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.945705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.945951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.946275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.946286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.946540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.946732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.946743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.947024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.947261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.947271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 
00:32:13.277 [2024-06-11 15:17:31.947618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.947848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.947858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.948165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.948405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.948415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.948640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.948882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.948896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.949219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.949555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.949565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.949839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.950144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.950155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.950407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.950761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.950771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.951123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.951475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.951486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 
00:32:13.277 [2024-06-11 15:17:31.951744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.952017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.952029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.952282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.952579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.952589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.952934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.953264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.953275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.953574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.953875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.953885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.954159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.954396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.954406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.954593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.954889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.954902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.955149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.955502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.955511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 
00:32:13.277 [2024-06-11 15:17:31.955872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.956134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.956143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.956416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.956740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.956750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.957049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.957303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.957313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.957646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.957926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.957936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.958177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.958487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.958498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.958689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.959019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.959034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.959365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.959660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.959670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 
00:32:13.277 [2024-06-11 15:17:31.959848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.960173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.960184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.960484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.960790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.960804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.961113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.961389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.961398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.277 qpair failed and we were unable to recover it. 00:32:13.277 [2024-06-11 15:17:31.961660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.961904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.277 [2024-06-11 15:17:31.961913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.962096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.962420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.962430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.962623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.962807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.962816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.963143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.963467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.963476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 
00:32:13.278 [2024-06-11 15:17:31.963802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.964099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.964111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.964435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.964672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.964681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.965012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.965197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.965208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.965522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.965846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.965854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.966175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.966371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.966382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.966682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.967004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.967013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.967360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.967687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.967695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 
00:32:13.278 [2024-06-11 15:17:31.968053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.968410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.968419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.968611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.968937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.968946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.969274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.969459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.969467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.969773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.970130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.970139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.970485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.970806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.970815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.971156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.971398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.971407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.971739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.971911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.971920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 
00:32:13.278 [2024-06-11 15:17:31.972161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.972514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.972524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.972817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.973144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.973153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.973478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.973806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.973815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.974083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.974428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.974437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.974740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.975036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.975046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.975401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.975565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.975575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.975914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.976234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.976244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 
00:32:13.278 [2024-06-11 15:17:31.976545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.976877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.976887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.977151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.977477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.977489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.977729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.978054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.978065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.978388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.978745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.978755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.278 qpair failed and we were unable to recover it. 00:32:13.278 [2024-06-11 15:17:31.979109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.278 [2024-06-11 15:17:31.979410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.979419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.979745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.980067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.980076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.980397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.980580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.980589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 
00:32:13.279 [2024-06-11 15:17:31.980841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.981174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.981184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.981513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.981824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.981834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.982158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.982404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.982413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.982663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.982998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.983007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.983237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.983561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.983571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.983907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.984241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.984251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.984516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.984800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.984809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 
00:32:13.279 [2024-06-11 15:17:31.985135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.985314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.985323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.985580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.985959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.985968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.986297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.986619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.986629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.986899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.987170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.987180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.987441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.987745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.987754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.988085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.988381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.988390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.988688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.989010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.989019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 
00:32:13.279 [2024-06-11 15:17:31.989296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.989617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.989626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.989952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.990224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.990233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.990548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.990875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.990884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.991157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.991452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.991461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.991785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.992129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.992138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.992460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.992783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.992792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.993089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.993334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.993343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 
00:32:13.279 [2024-06-11 15:17:31.993641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.993956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.993965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.994287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.994555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.994564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.994894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.995144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.995153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.995467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.995814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.995823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.996154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.996515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.279 [2024-06-11 15:17:31.996524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.279 qpair failed and we were unable to recover it. 00:32:13.279 [2024-06-11 15:17:31.996875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.997222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.997231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:31.997498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.997742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.997751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 
00:32:13.280 [2024-06-11 15:17:31.998095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.998420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.998430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:31.998757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.999000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.999009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:31.999333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.999596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:31.999604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:31.999796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.000094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.000103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.000414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.000761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.000769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.001096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.001365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.001373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.001695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.001989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.001997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 
00:32:13.280 [2024-06-11 15:17:32.002297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.002646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.002655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.003007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.003362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.003371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.003605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.003964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.003972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.004201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.004497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.004506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.004747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.005072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.005081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.005377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.005673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.005682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.006011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.006394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.006404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 
00:32:13.280 [2024-06-11 15:17:32.006652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.006961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.006969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.007169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.007496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.007505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.007769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.008071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.008080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.008347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.008575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.008584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.008833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.009070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.009079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.009442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.009809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.009818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.010169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.010397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.010407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 
00:32:13.280 [2024-06-11 15:17:32.010603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.010845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.280 [2024-06-11 15:17:32.010854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.280 qpair failed and we were unable to recover it. 00:32:13.280 [2024-06-11 15:17:32.011183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.011430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.011438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.011703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.012034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.012043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.012364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.012598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.012608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.012941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.013258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.013268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.013589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.013914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.013923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.014222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.014493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.014502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 
00:32:13.281 [2024-06-11 15:17:32.014854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.015205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.015215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.015470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.015806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.015815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.016065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.016367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.016375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.016710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.017036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.017046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.017423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.017752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.017761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.018060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.018373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.018382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.018719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.019084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.019093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 
00:32:13.281 [2024-06-11 15:17:32.019365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.019638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.019647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.019990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.020239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.020248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.020572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.020895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.020903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.021176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.021500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.021509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.021833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.022180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.022189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.022364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.022605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.022614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.022843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.023091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.023100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 
00:32:13.281 [2024-06-11 15:17:32.023450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.023745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.023754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.023997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.024294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.024303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.024652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.024888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.024897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.025202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.025474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.025483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.025738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.026053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.026062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.026328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.026651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.026660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.026982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.027304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.027313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 
00:32:13.281 [2024-06-11 15:17:32.027562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.027810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.027819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.028147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.028391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.281 [2024-06-11 15:17:32.028400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.281 qpair failed and we were unable to recover it. 00:32:13.281 [2024-06-11 15:17:32.028694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.029038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.029047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.029405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.029702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.029710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.030040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.030384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.030393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.030652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.030958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.030967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.031268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.031603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.031612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 
00:32:13.282 [2024-06-11 15:17:32.031921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.032242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.032251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.032575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.032871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.032880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.033207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.033550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.033559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.033869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.034168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.034177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.034502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.034770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.034778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.035047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.035296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.035305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.035544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.035895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.035904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 
00:32:13.282 [2024-06-11 15:17:32.036256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.036605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.036614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.036929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.037186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.037195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.037524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.037807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.037816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.038063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.038240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.038248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.038493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.038751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.038759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.039091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.039346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.039355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.039661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.039960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.039969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 
00:32:13.282 [2024-06-11 15:17:32.040262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.040584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.040592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.040928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.041301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.041311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.041552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.041851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.041860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.042188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.042416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.042424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.042690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.043031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.043040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.043300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.043627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.043636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.043958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.044234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.044243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 
00:32:13.282 [2024-06-11 15:17:32.044545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.044870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.282 [2024-06-11 15:17:32.044879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.282 qpair failed and we were unable to recover it. 00:32:13.282 [2024-06-11 15:17:32.045188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.045433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.045442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.045789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.046043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.046053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.046307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.046631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.046639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.046869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.047105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.047114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.047413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.047675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.047684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.047954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.048300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.048309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 
00:32:13.283 [2024-06-11 15:17:32.048631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.048801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.048810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.049147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.049478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.049487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.049836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.050079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.050088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.050317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.050661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.050670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.050971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.051213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.051221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.051567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.051810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.051822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.052096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.052324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.052333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 
00:32:13.283 [2024-06-11 15:17:32.052562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.052887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.052896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.053127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.053453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.053461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.053815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.054142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.054150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.054473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.054793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.054802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.055101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.055276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.055285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.055582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.055810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.055819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.056131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.056427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.056435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 
00:32:13.283 [2024-06-11 15:17:32.056766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.057011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.057021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.057267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.057616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.057628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.057978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.058330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.058339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.058641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.058982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.058991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.059318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.059589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.059598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.059940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.060244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.283 [2024-06-11 15:17:32.060254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.283 qpair failed and we were unable to recover it. 00:32:13.283 [2024-06-11 15:17:32.060548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.060813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.060822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 
00:32:13.284 [2024-06-11 15:17:32.061125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.061374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.061383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.061557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.061857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.061865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.062190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.062416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.062425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.062672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.062968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.062976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.063276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.063574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.063584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.063852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.064179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.064188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.064509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.064749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.064758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 
00:32:13.284 [2024-06-11 15:17:32.065060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.065407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.065416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.065665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.065958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.065967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.066262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.066585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.066594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.066983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.067225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.067234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.067549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.067901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.067910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.068262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.068419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.068428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.068752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.069047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.069056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 
00:32:13.284 [2024-06-11 15:17:32.069352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.069622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.069630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.069957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.070196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.070206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.070535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.070831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.070840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.071067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.071378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.071387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.071737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.072033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.072043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.072350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.072675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.072683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.284 [2024-06-11 15:17:32.072951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.073252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.073261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 
00:32:13.284 [2024-06-11 15:17:32.073528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.073855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.284 [2024-06-11 15:17:32.073864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.284 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.074164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.074458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.074466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.074816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.075114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.075123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.075454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.075778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.075787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.076113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.076437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.076445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.076694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.076947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.076955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.077313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.077662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.077670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 
00:32:13.285 [2024-06-11 15:17:32.077970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.078266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.078275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.078606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.078942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.078951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.079196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.079442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.079451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.079805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.080127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.080136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.080367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.080714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.080722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.080968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.081316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.081325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.081621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.081941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.081950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 
00:32:13.285 [2024-06-11 15:17:32.082196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.082525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.082533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.082855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.083126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.083135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.083429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.083752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.083760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.084084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.084297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.084305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.084553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.084792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.084800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.085118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.085415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.085424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.085689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.085921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.085930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 
00:32:13.285 [2024-06-11 15:17:32.086229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.086526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.086535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.086862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.087181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.087190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.087489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.087718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.087727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.087956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.088280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.088289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.088616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.088942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.088951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.089251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.089548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.089557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.089818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.090126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.090135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 
00:32:13.285 [2024-06-11 15:17:32.090363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.090659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.090668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.285 qpair failed and we were unable to recover it. 00:32:13.285 [2024-06-11 15:17:32.090905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.091225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.285 [2024-06-11 15:17:32.091235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 00:32:13.286 [2024-06-11 15:17:32.091531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.091831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.091839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 00:32:13.286 [2024-06-11 15:17:32.092165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.092512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.092520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 00:32:13.286 [2024-06-11 15:17:32.092776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.093102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.093111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 00:32:13.286 [2024-06-11 15:17:32.093427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.093670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.093679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 00:32:13.286 [2024-06-11 15:17:32.094005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.094248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.094257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 
00:32:13.286 [2024-06-11 15:17:32.094514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.094846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.094854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 00:32:13.286 [2024-06-11 15:17:32.095226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.095490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.095499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 00:32:13.286 [2024-06-11 15:17:32.095797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.096121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.096130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 00:32:13.286 [2024-06-11 15:17:32.096453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.096755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.286 [2024-06-11 15:17:32.096763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.286 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.097064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.097310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.097319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.097562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.097795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.097803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.098046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.098339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.098347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 
00:32:13.554 [2024-06-11 15:17:32.098654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.098829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.098838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.099156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.099482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.099491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.099763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.100089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.100098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.100420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.100718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.100727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.101054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.101283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.101292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.101533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.101856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.101864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.102161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.102455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.102464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 
00:32:13.554 [2024-06-11 15:17:32.102795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.103092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.103101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.103426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.103616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.103625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.103868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.104217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.104226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.104549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.104850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.104859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.105154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.105477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.554 [2024-06-11 15:17:32.105486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.554 qpair failed and we were unable to recover it. 00:32:13.554 [2024-06-11 15:17:32.105730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.106019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.106035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.106324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.106623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.106631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 
00:32:13.555 [2024-06-11 15:17:32.106956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.107287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.107296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.107617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.107862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.107871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.108117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.108353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.108362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.108611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.108847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.108856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.109206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.109532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.109541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.109787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.110137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.110146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.110416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.110740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.110748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 
00:32:13.555 [2024-06-11 15:17:32.111071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.111255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.111264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.111568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.111918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.111926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.112222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.112548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.112556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.112882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.113120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.113129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.113392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.113721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.113730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.114072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.114325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.114334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.114569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.114923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.114932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 
00:32:13.555 [2024-06-11 15:17:32.115284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.115635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.115644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.115946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.116291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.116300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.116626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.116950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.116959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.117196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.117520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.117528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.117765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.118094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.118103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.118428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.118697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.118705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.118964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.119291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.119300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 
00:32:13.555 [2024-06-11 15:17:32.119625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.119897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.119906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.120209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.120534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.120543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.120884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.121247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.121256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.121504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.121849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.121857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.122179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.122426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.122434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.122699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.122964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.122973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.123281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.123565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.123573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 
00:32:13.555 [2024-06-11 15:17:32.123895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.124198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.124207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.124531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.124792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.124800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.125078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.125420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.125429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.125667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.125994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.126003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.126249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.126512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.126521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.126789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.127118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.127127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.127377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.127687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.127696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 
00:32:13.555 [2024-06-11 15:17:32.128029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.128304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.128313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.128586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.128906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.128915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.129159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.129481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.129489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.555 qpair failed and we were unable to recover it. 00:32:13.555 [2024-06-11 15:17:32.129783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.130015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.555 [2024-06-11 15:17:32.130030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.130347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.130676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.130685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.130957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.131280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.131289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.131475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.131740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.131749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 
00:32:13.556 [2024-06-11 15:17:32.131973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.132295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.132304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.132687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.132930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.132939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.133288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.133521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.133530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.133796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.133969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.133977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.134329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.134665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.134674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.134996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.135327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.135336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.135710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.136005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.136015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 
00:32:13.556 [2024-06-11 15:17:32.136347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.136668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.136677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.137002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.137264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.137273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.137600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.137921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.137930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.138254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.138553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.138561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.138921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.139193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.139202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.139522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.139846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.139855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.140118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.140420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.140429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 
00:32:13.556 [2024-06-11 15:17:32.140779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.141103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.141113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.141437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.141786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.141795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.142154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.142387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.142399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.142660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.142925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.142934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.143261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.143507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.143515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.143832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.144160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.144169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.144429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.144732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.144741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 
00:32:13.556 [2024-06-11 15:17:32.145062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.145381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.145390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.145709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.146040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.146049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.146376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.146751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.146760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.147094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.147419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.147428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.147749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.148103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.148112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.148340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.148658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.148668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.148936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.149115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.149124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 
00:32:13.556 [2024-06-11 15:17:32.149353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.149671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.149680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.149870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.150099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.150108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.150409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.150691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.150700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.150936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.151178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.151187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.151538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.151860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.151868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.152141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.152489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.152498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.152725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.153049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.153058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 
00:32:13.556 [2024-06-11 15:17:32.153371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.153599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.153608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.153912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.154186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.154197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.556 qpair failed and we were unable to recover it. 00:32:13.556 [2024-06-11 15:17:32.154523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.154869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.556 [2024-06-11 15:17:32.154878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.155181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.155366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.155374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.155711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.155969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.155978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.156289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.156612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.156621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.156877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.157187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.157196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 
00:32:13.557 [2024-06-11 15:17:32.157521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.157850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.157859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.158099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.158419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.158428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.158776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.159137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.159146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.159445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.159767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.159776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.160075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.160397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.160406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.160680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.161028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.161037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.161283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.161513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.161521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 
00:32:13.557 [2024-06-11 15:17:32.161877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.162173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.162182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.162512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.162810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.162819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.163075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.163400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.163408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.163674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.163975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.163984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.164312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.164573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.164581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.164883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.165204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.165213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.165508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.165827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.165836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 
00:32:13.557 [2024-06-11 15:17:32.166032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.166222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.166231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.166464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.166760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.166768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.167073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.167416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.167425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.167750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.168098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.168107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.168486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.168784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.168793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.169123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.169445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.169454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.169680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.169923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.169931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 
00:32:13.557 [2024-06-11 15:17:32.170228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.170469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.170478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.170796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.171125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.171133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.171398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.171724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.171733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.172057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.172298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.172307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.172608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.172848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.172857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.173171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.173415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.173424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.173758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.174080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.174089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 
00:32:13.557 [2024-06-11 15:17:32.174359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.174657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.174665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.174988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.175335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.557 [2024-06-11 15:17:32.175345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.557 qpair failed and we were unable to recover it. 00:32:13.557 [2024-06-11 15:17:32.175593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.175835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.175844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.176092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.176390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.176399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.176638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.176958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.176967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.177292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.177614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.177622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.177865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.178041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.178050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 
00:32:13.558 [2024-06-11 15:17:32.178353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.178648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.178657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.178919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.179217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.179226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.179538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.179873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.179881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.180180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.180505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.180513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.180763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.181020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.181035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.181334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.181592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.181600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.181927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.182174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.182183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 
00:32:13.558 [2024-06-11 15:17:32.182505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.182830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.182838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.183165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.183425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.183434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.183734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.183966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.183976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.184298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.184527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.184536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.184901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.185226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.185235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.185505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.185802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.185811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.186130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.186449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.186457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 
00:32:13.558 [2024-06-11 15:17:32.186785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.187104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.187114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.187279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.187608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.187617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.187925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.188244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.188272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.188608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.188963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.188972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.189219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.189564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.189573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.189897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.190143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.190152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.190347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.190646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.190654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 
00:32:13.558 [2024-06-11 15:17:32.190910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.191223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.191232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.191471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.191790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.191798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.192072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.192257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.192265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.192592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.192851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.192860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.193090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.193401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.193410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.193759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.194087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.194096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.194421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.194742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.194751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 
00:32:13.558 [2024-06-11 15:17:32.195051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.195348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.195357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.195705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.196029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.196038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.196283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.196627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.196636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.196942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.197180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.197189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.197467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.197722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.197731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.198000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.198323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.198332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 00:32:13.558 [2024-06-11 15:17:32.198557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.198832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.198841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.558 qpair failed and we were unable to recover it. 
00:32:13.558 [2024-06-11 15:17:32.199171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.199516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.558 [2024-06-11 15:17:32.199525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.199850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.200174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.200182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.200512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.200885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.200894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.201245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.201569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.201578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.201905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.202145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.202154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.202392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.202688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.202696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.202993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.203261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.203270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 
00:32:13.559 [2024-06-11 15:17:32.203589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.203835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.203843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.204109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.204434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.204443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.204742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.205010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.205018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.205289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.205635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.205644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.205968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.206291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.206300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.206627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.206900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.206909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.207239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.207616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.207625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 
00:32:13.559 [2024-06-11 15:17:32.207882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.208222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.208231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.208597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.208893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.208902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.209227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.209473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.209482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.209782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.210006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.210015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.210353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.210674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.210683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.210952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.211304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.211313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.211617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.211844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.211853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 
00:32:13.559 [2024-06-11 15:17:32.212157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.212458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.212467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.212805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.213124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.213133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.213455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.213681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.213690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.213955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.214222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.214230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.214514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.214817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.214826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.215074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.215310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.215319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.215669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.215970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.215979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 
00:32:13.559 [2024-06-11 15:17:32.216305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.216630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.216638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.216828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.217153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.217162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.217490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.217763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.217771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.218112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.218426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.218435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.218684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.219006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.219015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.219350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.219692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.219701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.219961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.220267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.220276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 
00:32:13.559 [2024-06-11 15:17:32.220524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.220781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.220789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.221038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.221332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.221341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.221585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.221879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.221888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.222135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.222361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.222370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.222666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.222969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.222978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.223233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.223473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.223482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.559 qpair failed and we were unable to recover it. 00:32:13.559 [2024-06-11 15:17:32.223662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.559 [2024-06-11 15:17:32.223966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.223975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 
00:32:13.560 [2024-06-11 15:17:32.224220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.224567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.224576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.224836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.225134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.225143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.225442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.225699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.225708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.226037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.226362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.226371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.226638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.226964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.226973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.227295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.227603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.227611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.227952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.228248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.228258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 
00:32:13.560 [2024-06-11 15:17:32.228556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.228902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.228911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.229180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.229415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.229424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.229723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.229961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.229969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.230299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.230618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.230626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.230906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.231223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.231232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.231528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.231883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.231893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.232194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.232427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.232440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 
00:32:13.560 [2024-06-11 15:17:32.232686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.232980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.232989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.233319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.233585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.233594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.233783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.234098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.234106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.234403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.234723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.234731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.235061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.235385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.235393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.235711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.236035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.236045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.236391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.236721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.236729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 
00:32:13.560 [2024-06-11 15:17:32.237000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.237245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.237254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.237481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.237721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.237730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.238006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.238341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.238352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.238580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.238874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.238883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.239206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.239527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.239536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.239859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.240105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.240114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.240346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.240670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.240678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 
00:32:13.560 [2024-06-11 15:17:32.241013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.241386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.241395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.241742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.241983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.241991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.242316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.242583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.242592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.242939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.243187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.243196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.243517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.243814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.243823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.244161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.244489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.244499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.244771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.245085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.245094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 
00:32:13.560 [2024-06-11 15:17:32.245394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.245641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.245649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.245969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.246297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.246306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.246584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.246913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.246922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.247239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.247533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.247542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.247822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.248118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.248127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.560 qpair failed and we were unable to recover it. 00:32:13.560 [2024-06-11 15:17:32.248423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.560 [2024-06-11 15:17:32.248725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.248733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.249047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.249285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.249294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 
00:32:13.561 [2024-06-11 15:17:32.249462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.249702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.249711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.250008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.250349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.250360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.250658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.250832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.250840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.251192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.251526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.251535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.251853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.252123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.252132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.252456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.252728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.252737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.252910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.253083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.253092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 
00:32:13.561 [2024-06-11 15:17:32.253391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.253745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.253754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.253996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.254296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.254305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.254547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.254842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.254851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.255170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.255492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.255501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.255770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.256004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.256013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.256337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.256664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.256673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.256950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.257188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.257197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 
00:32:13.561 [2024-06-11 15:17:32.257531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.257849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.257858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.258209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.258560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.258568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.258812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.259162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.259171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.259528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.259879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.259888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.260140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.260426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.260436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.260667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.260993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.261002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.261325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.261677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.261686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 
00:32:13.561 [2024-06-11 15:17:32.262037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.262275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.262284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.262586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.262933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.262942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.263214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.263531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.263539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.263841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.264137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.264146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.264487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.264733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.264742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.265064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.265361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.265369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.265561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.265791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.265799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 
00:32:13.561 [2024-06-11 15:17:32.266120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.266442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.266451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.266634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.266898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.266906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.267153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.267390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.267398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.267665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.267988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.267997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.268243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.268565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.268573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.268910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.269088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.269097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 00:32:13.561 [2024-06-11 15:17:32.269446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.269701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.561 [2024-06-11 15:17:32.269709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.561 qpair failed and we were unable to recover it. 
00:32:13.562 [2024-06-11 15:17:32.270050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.270275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.270283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.270606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.270924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.270933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.271259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.271574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.271582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.271907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.272242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.272251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.272496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.272820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.272829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.273013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.273286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.273295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.273609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.273934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.273943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 
00:32:13.562 [2024-06-11 15:17:32.274188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.274508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.274517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.274785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.275033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.275043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.275341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.275678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.275686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.275879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.276230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.276239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.276505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.276779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.276788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.277032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.277261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.277269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.277512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.277860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.277869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 
00:32:13.562 [2024-06-11 15:17:32.278065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.278388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.278396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.278639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.278963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.278972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.279293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.279616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.279625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.279923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.280162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.280171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.280499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.280803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.280811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.281056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.281281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.281290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.281614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.281945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.281954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 
00:32:13.562 [2024-06-11 15:17:32.282276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.282519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.282528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.282774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.283105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.283114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.283428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.283749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.283758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.284055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.284380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.284388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.284655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.284969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.284978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.285308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.285629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.285638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.285938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.286185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.286194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 
00:32:13.562 [2024-06-11 15:17:32.286442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.286704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.286713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.286901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.287129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.287138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.287493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.287817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.287826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.288154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.288392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.288401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.288695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.289046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.289055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.289301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.289623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.289632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.289813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.290117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.290126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 
00:32:13.562 [2024-06-11 15:17:32.290369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.290690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.290698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.290994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.291249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.291257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.291490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.291816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.291824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.292152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.292418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.292427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.292728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.293075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.293084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.562 [2024-06-11 15:17:32.293409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.293731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.562 [2024-06-11 15:17:32.293740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.562 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.294036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.294261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.294270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 
00:32:13.563 [2024-06-11 15:17:32.294530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.294846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.294855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.295158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.295483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.295491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.295673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.295972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.295981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.296331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.296557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.296566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.296841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.297138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.297147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.297471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.297766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.297774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.298016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.298331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.298341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 
00:32:13.563 [2024-06-11 15:17:32.298667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.298986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.298995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.299301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.299622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.299630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.299953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.300280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.300289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.300530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.300852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.300860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.301043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.301375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.301383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.301616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.301889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.301898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.302226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.302550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.302558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 
00:32:13.563 [2024-06-11 15:17:32.302793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.303119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.303128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.303397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.303677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.303686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.304026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.304337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.304346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.304671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.304972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.304980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.305306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.305599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.305608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.305905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.306153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.306161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.306476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.306783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.306791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 
00:32:13.563 [2024-06-11 15:17:32.307095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.307341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.307350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.307585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.307825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.307834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.308156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.308389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.308398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.308726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.309051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.309060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.309310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.309551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.309560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.309893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.310208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.310217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.310537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.310775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.310784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 
00:32:13.563 [2024-06-11 15:17:32.311114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.311379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.311388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.311656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.311969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.311978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.312302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.312625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.312633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.312955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.313193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.313202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.313531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.313758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.313767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.563 qpair failed and we were unable to recover it. 00:32:13.563 [2024-06-11 15:17:32.314090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.563 [2024-06-11 15:17:32.314337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.314346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.314663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.314964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.314973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 
00:32:13.564 [2024-06-11 15:17:32.315271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.315615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.315625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.315954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.316202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.316210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.316369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.316620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.316629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.316949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.317181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.317190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.317539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.317863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.317872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.318105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.318432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.318441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.318772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.319114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.319124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 
00:32:13.564 [2024-06-11 15:17:32.319398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.319738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.319747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.320058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.320356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.320365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.320713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.321043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.321052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.321325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.321669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.321678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.321925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.322234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.322243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.322541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.322724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.322733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.322980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.323305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.323315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 
00:32:13.564 [2024-06-11 15:17:32.323580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.323879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.323888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.324165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.324484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.324493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.324839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.325089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.325098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.325420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.325743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.325751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.326068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.326255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.326263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.326490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.326728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.326736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.327033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.327330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.327340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 
00:32:13.564 [2024-06-11 15:17:32.327666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.327988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.327997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.328325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.328566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.328575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.328894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.329191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.329199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.329496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.329847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.329856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.330182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.330510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.330518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.330838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.331164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.331173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 00:32:13.564 [2024-06-11 15:17:32.331414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.331699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.564 [2024-06-11 15:17:32.331708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.564 qpair failed and we were unable to recover it. 
00:32:13.564 [2024-06-11 15:17:32.332011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.565 [2024-06-11 15:17:32.332258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.565 [2024-06-11 15:17:32.332268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.565 qpair failed and we were unable to recover it.
[... the same four-line sequence -- two posix_sock_create connect() failures (errno = 111, ECONNREFUSED), one nvme_tcp_qpair_connect_sock error for tqpair=0x7fc0a8000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." -- repeats continuously from 15:17:32.332 through 15:17:32.423 (elapsed 00:32:13.564 to 00:32:13.839) ...]
00:32:13.839 [2024-06-11 15:17:32.423335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.839 [2024-06-11 15:17:32.423576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.839 [2024-06-11 15:17:32.423587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.839 qpair failed and we were unable to recover it.
00:32:13.839 [2024-06-11 15:17:32.423937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.424290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.424299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.424652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.424879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.424888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.425198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.425445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.425453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.425778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.426096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.426105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.426400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.426704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.426713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.427033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.427362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.427370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.427636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.427961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.427970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 
00:32:13.839 [2024-06-11 15:17:32.428293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.428481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.428489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.428735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.429003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.429012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.429197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.429431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.429441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.429739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.430086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.430095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.430335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.430659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.430668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.430995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.431317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.431326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.431625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.431897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.431906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 
00:32:13.839 [2024-06-11 15:17:32.432202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.432528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.432536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.432868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.433197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.433206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.433455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.433774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.433782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.434030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.434259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.434267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.434617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.434868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.434876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.435129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.435370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.435380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.435712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.436033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.436042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 
00:32:13.839 [2024-06-11 15:17:32.436363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.436657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.436666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.436992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.437316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.839 [2024-06-11 15:17:32.437326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.839 qpair failed and we were unable to recover it. 00:32:13.839 [2024-06-11 15:17:32.437584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.437891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.437900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.438237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.438515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.438524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.438761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.439003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.439012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.439300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.439627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.439637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.440039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.440226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.440237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 
00:32:13.840 [2024-06-11 15:17:32.440536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.440801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.440810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.441091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.441404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.441413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.441778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.442048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.442058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.442396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.442724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.442733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.442999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.443241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.443251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.443578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.443839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.443849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.444176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.444528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.444537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 
00:32:13.840 [2024-06-11 15:17:32.444809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.445132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.445142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.445441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.445687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.445696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.446021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.446259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.446270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.446596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.446897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.446907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.447172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.447412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.447420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.447781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.448134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.448144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.448483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.448783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.448793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 
00:32:13.840 [2024-06-11 15:17:32.449143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.449433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.449442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.449620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.449865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.449874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.450143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.450412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.450421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.450610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.450793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.450802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.451065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.451404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.451413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.451665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.451909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.451918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.452243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.452504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.452513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 
00:32:13.840 [2024-06-11 15:17:32.452698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.452932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.452942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.840 qpair failed and we were unable to recover it. 00:32:13.840 [2024-06-11 15:17:32.453324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.453522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.840 [2024-06-11 15:17:32.453532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.453780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.454079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.454088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.454388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.454666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.454674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.455028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.455281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.455290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.455534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.455893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.455901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.456259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.456524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.456534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 
00:32:13.841 [2024-06-11 15:17:32.456798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.457041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.457050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.457240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.457426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.457435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.457757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.457932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.457943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.458277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.458576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.458585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.458838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.459158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.459168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.459355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.459604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.459614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.459966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.460215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.460224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 
00:32:13.841 [2024-06-11 15:17:32.460385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.460585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.460594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.460906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.461083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.461092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.461342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.461639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.461648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.461902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.462146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.462156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.462455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.462775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.462784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.463044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.463341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.463350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.463517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.463840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.463849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 
00:32:13.841 [2024-06-11 15:17:32.464137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.464309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.464318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.464513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.464700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.464709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.841 qpair failed and we were unable to recover it. 00:32:13.841 [2024-06-11 15:17:32.464974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.841 [2024-06-11 15:17:32.465262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.465271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.465441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.465761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.465770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.466036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.466304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.466312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.466559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.466802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.466811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.467144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.467333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.467342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 
00:32:13.842 [2024-06-11 15:17:32.467533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.467710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.467718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.467945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.468189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.468199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.468431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.468657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.468666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.468944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.469125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.469135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.469435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.469770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.469779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.469959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.470151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.470160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.470354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.470589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.470597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 
00:32:13.842 [2024-06-11 15:17:32.470846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.471029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.471038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.471362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.471628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.471637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.471850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.472118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.472127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.472369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.472550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.472559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.472741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.472909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.472918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.473148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.473468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.473478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.473655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.473814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.473823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 
00:32:13.842 [2024-06-11 15:17:32.474063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.474246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.474255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.474550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.474788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.474798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.475050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.475295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.475304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.475548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.475858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.475867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.476114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.476288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.476296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.476596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.476922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.476931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 00:32:13.842 [2024-06-11 15:17:32.477120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.477384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.842 [2024-06-11 15:17:32.477393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.842 qpair failed and we were unable to recover it. 
00:32:13.842 [2024-06-11 15:17:32.477748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.842 [2024-06-11 15:17:32.477927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.842 [2024-06-11 15:17:32.477937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.842 qpair failed and we were unable to recover it.
[... the same four-line pattern (connect() failed, errno = 111; connect() failed, errno = 111; sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 15:17:32.478 through 15:17:32.556 ...]
00:32:13.848 [2024-06-11 15:17:32.556379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.848 [2024-06-11 15:17:32.556611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.848 [2024-06-11 15:17:32.556620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420
00:32:13.848 qpair failed and we were unable to recover it.
00:32:13.848 [2024-06-11 15:17:32.556947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.557250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.557259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.557501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.557834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.557843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.558165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.558485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.558494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.558834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.559072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.559081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.559383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.559721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.559730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.559967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.560260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.560269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.560594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.560921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.560930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 
00:32:13.848 [2024-06-11 15:17:32.561257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.561557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.561566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.561732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.561910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.561919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.562217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.562590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.562599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.562932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.563228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.563237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.563563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.563816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.563825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.564096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.564407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.564416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.564719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.564945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.564954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 
00:32:13.848 [2024-06-11 15:17:32.565214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.565556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.565564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.565733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.566061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.566070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.566312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.566611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.848 [2024-06-11 15:17:32.566619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.848 qpair failed and we were unable to recover it. 00:32:13.848 [2024-06-11 15:17:32.566945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.567192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.567201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.567390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.567630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.567639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.567960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.568290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.568299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.568539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.568773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.568782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 
00:32:13.849 [2024-06-11 15:17:32.569077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.569345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.569354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.569601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.569872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.569880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.570154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.570403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.570411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.570677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.570905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.570914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.571212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.571443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.571452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.571747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.571934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.571943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.572117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.572415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.572425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 
00:32:13.849 [2024-06-11 15:17:32.572663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.572957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.572966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.573264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.573573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.573582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.573809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.574115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.574125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.574304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.574552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.574561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.574787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.575085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.575094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.575322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.575644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.575653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.575841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.576027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.576036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 
00:32:13.849 [2024-06-11 15:17:32.576307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.576606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.576615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.576856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.577083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.577092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.577391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.577564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.577573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.577873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.578193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.578203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.578458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.578762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.578771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.579035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.579276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.579285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.579469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.579709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.579718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 
00:32:13.849 [2024-06-11 15:17:32.579978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.580158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.580166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.580411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.580763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.580773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.581100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.581363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.581372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.581667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.581936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.849 [2024-06-11 15:17:32.581945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.849 qpair failed and we were unable to recover it. 00:32:13.849 [2024-06-11 15:17:32.582241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.582507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.582516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.582746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.582976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.582985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.583165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.583409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.583418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 
00:32:13.850 [2024-06-11 15:17:32.583660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.583985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.583995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.584172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.584344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.584353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.584651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.584905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.584914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.585262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.585521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.585530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.585704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.585998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.586007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.586249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.586486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.586495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.586821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.587160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.587170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 
00:32:13.850 [2024-06-11 15:17:32.587438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.587680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.587689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.587987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.588166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.588176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.588403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.588747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.588757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.588941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.589112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.589123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.589368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.589614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.589624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.589871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.590190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.590200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.590497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.590671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.590680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 
00:32:13.850 [2024-06-11 15:17:32.590862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.591192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.591202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.591449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.591684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.591694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.591939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.592122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.592132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.592397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.592734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.592743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.593002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.593298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.593307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.593554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.593798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.593807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.850 [2024-06-11 15:17:32.594130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.594370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.594381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 
00:32:13.850 [2024-06-11 15:17:32.594606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.594845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.850 [2024-06-11 15:17:32.594856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.850 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.595185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.595503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.595512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.595810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.596063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.596073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.596320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.596504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.596514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.596811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.597053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.597063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.597392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.597689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.597699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.598029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.598383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.598392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 
00:32:13.851 [2024-06-11 15:17:32.598624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.598821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.598830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.599100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.599333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.599343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.599654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.599893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.599904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.600088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.600383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.600392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.600692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.600921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.600931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.601158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.601483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.601492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.601760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.601928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.601937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 
00:32:13.851 [2024-06-11 15:17:32.602102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.602280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.602290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.602613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.602868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.602877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.603190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.603450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.603459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.603694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.603989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.603998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.604191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.604525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.604534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.604858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.605037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.605048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.605277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.605572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.605582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 
00:32:13.851 [2024-06-11 15:17:32.605881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.606072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.606082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.606357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.606622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.606630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.606870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.607124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.607134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.607484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.607745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.607754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.608082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.608332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.608341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.608569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.608865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.608875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.608999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.609241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.609251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 
00:32:13.851 [2024-06-11 15:17:32.609546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.609726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.609735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.851 [2024-06-11 15:17:32.610063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.610366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.851 [2024-06-11 15:17:32.610375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.851 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.610674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.610972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.610981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.611229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.611401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.611410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.611711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.611881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.611892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.612069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.612421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.612430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.612727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.612885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.612895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 
00:32:13.852 [2024-06-11 15:17:32.613167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.613463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.613472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.613747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.613979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.613989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.614233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.614457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.614466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.614652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.614945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.614954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.615200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.615437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.615446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.615686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.615924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.615933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.616185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.616482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.616491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 
00:32:13.852 [2024-06-11 15:17:32.616725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.616969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.616978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.617277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.617464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.617474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.617769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.618012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.618021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.618257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.618519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.618528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.618800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.619098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.619108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.619403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.619700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.619709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.619938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.620120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.620130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 
00:32:13.852 [2024-06-11 15:17:32.620457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.620698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.620708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.620953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.621309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.621319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.621587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.621881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.621890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.622132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.622376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.622385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.622630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.622884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.622894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.623133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.623379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.623388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.623584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.623904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.623913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 
00:32:13.852 [2024-06-11 15:17:32.624165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.624480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.624489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.624720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.624892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.624902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.852 [2024-06-11 15:17:32.625198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.625422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.852 [2024-06-11 15:17:32.625432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.852 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.625757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.626080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.626090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.626267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.626560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.626569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.626767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.627012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.627021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.627271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.627597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.627606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 
00:32:13.853 [2024-06-11 15:17:32.627794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.628040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.628049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.628293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.628619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.628628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.628858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.629094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.629103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.629368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.629609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.629619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.629844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.630143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.630153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.630449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.630618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.630626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.630927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.631193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.631203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 
00:32:13.853 [2024-06-11 15:17:32.631526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.631705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.631715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.632013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.632339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.632348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.632577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.632746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.632755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.632998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.633230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.633240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.633491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.633666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.633674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.633970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.634295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.634304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.634574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.634869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.634878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 
00:32:13.853 [2024-06-11 15:17:32.635206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.635460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.635469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.635708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.635882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.635891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.636187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.636411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.636420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.636600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.636861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.636870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.637110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.637457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.637467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.637817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.638053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.638063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.638337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.638513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.638522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 
00:32:13.853 [2024-06-11 15:17:32.638770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.639029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.639039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.639277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.639542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.639551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.639861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.640101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.640110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.853 [2024-06-11 15:17:32.640432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.640700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.853 [2024-06-11 15:17:32.640710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.853 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.641038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.641271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.641281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.641462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.641700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.641709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.641974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.642237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.642247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 
00:32:13.854 [2024-06-11 15:17:32.642473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.642795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.642804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.643044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.643338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.643347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.643520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.643765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.643775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.644126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.644394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.644403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.644730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.645056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.645066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.645402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.645679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.645688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.646005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.646274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.646284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 
00:32:13.854 [2024-06-11 15:17:32.646532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.646778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.646787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.647033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.647327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.647336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.647606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.647851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.647861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.648048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.648372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.648382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.648614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.648841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.648850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.649040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.649214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.649223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.649469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.649710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.649719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 
00:32:13.854 [2024-06-11 15:17:32.650032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.650293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.650302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.650540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.650808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.650817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.650997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.651299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.651310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.651549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.651850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.651859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.652162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.652484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.652494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.652723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.653050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.653059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.653378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.653620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.653629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 
00:32:13.854 [2024-06-11 15:17:32.653863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.654176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.654186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.654439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.654754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.654763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.654998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.655248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.655257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.655435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.655673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.655682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.854 [2024-06-11 15:17:32.655936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.656110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.854 [2024-06-11 15:17:32.656120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.854 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.656454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.656704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.656714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.656978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.657285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.657294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 
00:32:13.855 [2024-06-11 15:17:32.657524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.657764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.657773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.658044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.658288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.658296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.658554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.658916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.658924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.659111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.659345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.659354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.659704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.660032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.660041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.660365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.660690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.660698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.660967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.661266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.661274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 
00:32:13.855 [2024-06-11 15:17:32.661573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.661920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.661929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.662178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.662499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.662507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.662746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.662943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.662952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.663238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.663588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.663597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.663904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.664162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.664171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.664363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.664683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.664691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.665020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.665208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.665217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 
00:32:13.855 [2024-06-11 15:17:32.665480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.665802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.665810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.666132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.666390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.666398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.666693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.666922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.666930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.667167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.667516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.667524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:13.855 [2024-06-11 15:17:32.667843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.668094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.855 [2024-06-11 15:17:32.668103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:13.855 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.668428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.668744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.668752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.669050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.669273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.669282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 
00:32:14.120 [2024-06-11 15:17:32.669596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.669842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.669853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.670099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.670357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.670366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.670694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.671017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.671029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.671351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.671621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.671630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.671872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.672120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.672129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.672432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.672673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.672681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.672999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.673296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.673306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 
00:32:14.120 [2024-06-11 15:17:32.673632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.673903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.673912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.674265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.674564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.674573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.674896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.675278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.675287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.675561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.675920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.675930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.676201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.676472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.676481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.676741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.677066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.677074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.677400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.677669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.677677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 
00:32:14.120 [2024-06-11 15:17:32.677914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.678200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.678209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.120 qpair failed and we were unable to recover it. 00:32:14.120 [2024-06-11 15:17:32.678518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.120 [2024-06-11 15:17:32.678841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.678850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.679188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.679553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.679561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.679748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.680050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.680059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.680410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.680710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.680718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.681043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.681280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.681289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.681613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.681934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.681945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 
00:32:14.121 [2024-06-11 15:17:32.682195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.682511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.682519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.682847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.683170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.683179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.683505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.683827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.683836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.684170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.684497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.684505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.684833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.685152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.685161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.685409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.685756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.685764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.686142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.686438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.686447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 
00:32:14.121 [2024-06-11 15:17:32.686770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.687016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.687028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.687369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.687610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.687619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.687916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.688268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.688280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 15:17:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:14.121 15:17:32 -- common/autotest_common.sh@852 -- # return 0 00:32:14.121 [2024-06-11 15:17:32.688605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.688878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.688888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 15:17:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:14.121 [2024-06-11 15:17:32.689125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 15:17:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:14.121 [2024-06-11 15:17:32.689447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.689457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 15:17:32 -- common/autotest_common.sh@10 -- # set +x 00:32:14.121 [2024-06-11 15:17:32.689775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.690029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.690038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 
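Note: the xtrace lines interleaved in this chunk (autotest_common.sh@848 "(( i == 0 ))", @852 "return 0", and nvmf/common.sh@471 "timing_exit start_nvmf_tgt") show the harness confirming that the nvmf target process came up and closing the start_nvmf_tgt timing region, while the initiator's connect retries keep streaming around them. A minimal sketch of the kind of wait-and-return check that trace implies; the function and variable names here are hypothetical, not the real autotest_common.sh code:

    wait_for_tgt() {
      local pid=$1 i=0
      until kill -0 "$pid" 2>/dev/null; do   # loop until the target process exists
        i=$((i + 1))
        [ "$i" -gt 30 ] && return 1          # give up after ~30 seconds
        sleep 1
      done
      return 0                               # mirrors the 'return 0' seen in the trace
    }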
00:32:14.121 [2024-06-11 15:17:32.690347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.690590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.690599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.690951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.691302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.691311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.691548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.691853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.691863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.692176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.692518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.692527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.692830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.693134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.693145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.693486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.693681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.693691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.694008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.694329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.694338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 
00:32:14.121 [2024-06-11 15:17:32.694674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.694946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.694955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.121 [2024-06-11 15:17:32.695234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.695549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.121 [2024-06-11 15:17:32.695558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.121 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.695808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.696104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.696113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.696462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.696787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.696797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.697087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.697464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.697472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.697830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.698177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.698186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.698405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.698726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.698735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 
00:32:14.122 [2024-06-11 15:17:32.699035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.699223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.699232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.699477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.699703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.699712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.699960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.700187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.700197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.700552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.700847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.700856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.701099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.701397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.701407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.701710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.702052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.702061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.702301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.702623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.702632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 
00:32:14.122 [2024-06-11 15:17:32.702960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.703289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.703298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.703542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.703815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.703824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.704082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.704266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.704275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.704521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.704819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.704828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.705144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.705416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.705424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.705672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.706023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.706037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.706343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.706681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.706689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 
00:32:14.122 [2024-06-11 15:17:32.706844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.707167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.707177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.707368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.707686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.707695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.707994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.708327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.708337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.708580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.708890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.708899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.709207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.709453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.709461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.709700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.709941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.709950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.710215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.710483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.710492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 
00:32:14.122 [2024-06-11 15:17:32.710826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.711166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.711175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.711384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.711579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.711589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.122 qpair failed and we were unable to recover it. 00:32:14.122 [2024-06-11 15:17:32.711938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.122 [2024-06-11 15:17:32.712173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.712183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.712456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.712838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.712848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.713100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.713361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.713370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.713701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.713936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.713944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.714238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.714483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.714492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 
00:32:14.123 [2024-06-11 15:17:32.714820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.715079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.715089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.715298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.715497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.715506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.715686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.715959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.715968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.716215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.716519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.716528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.716716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.716964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.716974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.717301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.717546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.717555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.717842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.718175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.718184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 
00:32:14.123 [2024-06-11 15:17:32.718469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.718719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.718728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.719059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.719409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.719419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.719605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.719875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.719884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.720146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.720332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.720341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.720535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.720895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.720904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.721184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.721377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.721385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 15:17:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.123 [2024-06-11 15:17:32.721586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.721835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.721845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 
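Note: the nvmf/common.sh@472 trace in this chunk installs the harness cleanup handler: trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT. On interrupt, termination, or normal exit it first attempts to dump the target's shared-memory debug state and then always tears the test environment down; the "|| :" keeps a failed dump from aborting cleanup under set -e. The same bash pattern in isolation, with stand-in helpers rather than the real SPDK functions:

    #!/usr/bin/env bash
    set -e
    dump_state()  { echo "best-effort debug dump (allowed to fail)"; }
    cleanup_all() { echo "mandatory teardown"; }
    # Best-effort dump first, then cleanup, on SIGINT/SIGTERM/EXIT -- same shape
    # as the trap shown in the trace above.
    trap 'dump_state || :; cleanup_all' SIGINT SIGTERM EXIT
    echo "test body runs here"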
00:32:14.123 15:17:32 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:14.123 [2024-06-11 15:17:32.722191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 15:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.123 [2024-06-11 15:17:32.722546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 15:17:32 -- common/autotest_common.sh@10 -- # set +x 00:32:14.123 [2024-06-11 15:17:32.722558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.722841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.723104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.723113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.723390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.723650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.723660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.723912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.724184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.724193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.724470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.724662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.724671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.724973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.725132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.725142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.725417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.725613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.725621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 
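Note: the host/target_disconnect.sh@19 trace in this chunk creates the test's backing device: a 64 MB RAM-backed malloc bdev with a 512-byte block size, named Malloc0 (the bare "Malloc0" echoed a few chunks further down is the RPC's return value). rpc_cmd here is the harness wrapper around SPDK's JSON-RPC interface; an equivalent direct call with scripts/rpc.py would look roughly like this, assuming the default RPC socket:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # prints the name of the created bdev, e.g. Malloc0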
00:32:14.123 [2024-06-11 15:17:32.725986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.726248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.726257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.726582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.726844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.123 [2024-06-11 15:17:32.726853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.123 qpair failed and we were unable to recover it. 00:32:14.123 [2024-06-11 15:17:32.727183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.727504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.727515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.727836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.728165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.728174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.728421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.728602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.728611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.728785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.729114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.729123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.729398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.729589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.729597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 
00:32:14.124 [2024-06-11 15:17:32.729961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.730212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.730221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.730525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.730853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.730863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.731170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.731418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.731427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.731758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.732003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.732013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.732285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.732482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.732491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.732820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.733064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.733076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.733342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.733506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.733516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 
00:32:14.124 [2024-06-11 15:17:32.733712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.734011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.734020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.734353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.734545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.734554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.734892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.735204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.735214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.735521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.735710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.735719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.736019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.736386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.736395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.736714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.736959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.736968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.737308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.737564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.737573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 
00:32:14.124 [2024-06-11 15:17:32.737951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.738278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.738288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.738534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.738850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.738864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.739198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.739440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.739449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.739687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.740038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.740048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.740372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.740571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.740580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.124 qpair failed and we were unable to recover it. 00:32:14.124 [2024-06-11 15:17:32.740933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.741253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.124 [2024-06-11 15:17:32.741262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.741566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.741913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.741922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 
00:32:14.125 [2024-06-11 15:17:32.742248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.742494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.742502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.742817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.743113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.743122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.743452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 Malloc0 00:32:14.125 [2024-06-11 15:17:32.743766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.743775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.744008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 15:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.125 [2024-06-11 15:17:32.744355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.744365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 15:17:32 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:14.125 [2024-06-11 15:17:32.744682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 15:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.125 [2024-06-11 15:17:32.744927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.744936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 15:17:32 -- common/autotest_common.sh@10 -- # set +x 00:32:14.125 [2024-06-11 15:17:32.745285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.745582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.745590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.745825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.746149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.746158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 
00:32:14.125 [2024-06-11 15:17:32.746410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.746748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.746756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.747023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.747285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.747294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.747623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.747959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.747967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.748252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.748548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.748557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.748877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.749176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.749185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.749430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.749658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.749667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.750015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.750358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.750367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 
00:32:14.125 [2024-06-11 15:17:32.750696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.751019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.751030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.751136] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.125 [2024-06-11 15:17:32.751306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.751582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.751591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.751841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.752156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.752164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.752401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.752585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.752593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.752919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.753149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.753159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.753386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.753680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.753689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.754022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.754353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.754362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 
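Note: the host/target_disconnect.sh@21 trace a couple of chunks above requests the TCP transport, and the "nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***" line in this chunk is the target acknowledging that the transport came up. The rough rpc.py equivalent, with the option string reproduced verbatim from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o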
00:32:14.125 [2024-06-11 15:17:32.754687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.755035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.755044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.755315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.755659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.755668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.755969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.756214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.756223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.125 qpair failed and we were unable to recover it. 00:32:14.125 [2024-06-11 15:17:32.756552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.125 [2024-06-11 15:17:32.756732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.756741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.757068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.757367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.757376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.757623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.757939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.757948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.758187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.758481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.758490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 
00:32:14.126 [2024-06-11 15:17:32.758747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.759086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.759095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.759358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.759678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.759686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 15:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.126 [2024-06-11 15:17:32.759964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 15:17:32 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:14.126 [2024-06-11 15:17:32.760197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.760206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 15:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.126 [2024-06-11 15:17:32.760532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 15:17:32 -- common/autotest_common.sh@10 -- # set +x 00:32:14.126 [2024-06-11 15:17:32.760855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.760864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.761186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.761428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.761436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.761789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.762143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.762152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 
00:32:14.126 [2024-06-11 15:17:32.762454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.762698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.762707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.763034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.763374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.763383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.763705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.763966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.763975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.764244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.764572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.764581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.764856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.765198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.765207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.765459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.765688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.765696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.765995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.766301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.766310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 
00:32:14.126 [2024-06-11 15:17:32.766650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.766914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.766923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.767259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.767509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.767518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 15:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.126 [2024-06-11 15:17:32.767842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.768036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.768045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 15:17:32 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:14.126 [2024-06-11 15:17:32.768372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 15:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.126 [2024-06-11 15:17:32.768691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.768701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 15:17:32 -- common/autotest_common.sh@10 -- # set +x 00:32:14.126 [2024-06-11 15:17:32.769023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.769318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.769326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.769659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.769968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.769977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.770300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.770570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.770579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 
00:32:14.126 [2024-06-11 15:17:32.770933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.771176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.771185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.771530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.771828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.771837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.772085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.772314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.772323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.126 qpair failed and we were unable to recover it. 00:32:14.126 [2024-06-11 15:17:32.772552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.126 [2024-06-11 15:17:32.772799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.772808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.773082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.773406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.773414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.773726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.774050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.774059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.774379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.774624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.774633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 
00:32:14.127 [2024-06-11 15:17:32.774957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.775281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.775290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.775530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.775825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 15:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.127 [2024-06-11 15:17:32.775834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.776157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 15:17:32 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.127 [2024-06-11 15:17:32.776391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.776400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 15:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.127 [2024-06-11 15:17:32.776653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 15:17:32 -- common/autotest_common.sh@10 -- # set +x 00:32:14.127 [2024-06-11 15:17:32.776974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.776984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.777307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.777602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.777611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.777937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.778237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.778246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 
00:32:14.127 [2024-06-11 15:17:32.778514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.778685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.778694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.779027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.779265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.127 [2024-06-11 15:17:32.779274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc0a8000b90 with addr=10.0.0.2, port=4420 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.779369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.127 [2024-06-11 15:17:32.781801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.127 [2024-06-11 15:17:32.781908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.127 [2024-06-11 15:17:32.781926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.127 [2024-06-11 15:17:32.781934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.127 [2024-06-11 15:17:32.781940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.127 [2024-06-11 15:17:32.781960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 15:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.127 15:17:32 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:14.127 15:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.127 15:17:32 -- common/autotest_common.sh@10 -- # set +x 00:32:14.127 [2024-06-11 15:17:32.791801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.127 15:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.127 [2024-06-11 15:17:32.791899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.127 [2024-06-11 15:17:32.791916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.127 [2024-06-11 15:17:32.791922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.127 [2024-06-11 15:17:32.791928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.127 [2024-06-11 15:17:32.791943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.127 qpair failed and we were unable to recover it. 
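For reference, the autotest trace interleaved above (host/target_disconnect.sh) builds the target side with the usual SPDK RPCs: create the subsystem, attach the Malloc0 namespace, and add the data and discovery listeners on 10.0.0.2:4420, at which point the "NVMe/TCP Target Listening" notice appears. The sketch below reconstructs that sequence as direct rpc.py calls; it is an illustration, not part of the console output. The ./scripts/rpc.py path and the transport-creation line are assumptions, while the remaining commands and arguments are taken verbatim from the traced rpc_cmd lines. The repeated "connect() failed, errno = 111" entries are the host seeing ECONNREFUSED (errno 111 on Linux) while the target_disconnect test exercises connection loss.

  # Reconstructed target-side setup (sketch, not copied from this log)
  rpc=./scripts/rpc.py                                   # assumed location in the SPDK tree
  $rpc nvmf_create_transport -t tcp                      # assumed earlier step; produces the "TCP Transport Init" notice
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listeners are up, the earlier errno 111 failures give way to the "Unknown controller ID 0x1" / "Connect command failed" pattern in the entries that follow, which appears to be the target rejecting an I/O qpair connect for a controller it no longer tracks.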
00:32:14.127 15:17:32 -- host/target_disconnect.sh@58 -- # wait 3485228 00:32:14.127 [2024-06-11 15:17:32.801754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.127 [2024-06-11 15:17:32.801848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.127 [2024-06-11 15:17:32.801864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.127 [2024-06-11 15:17:32.801871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.127 [2024-06-11 15:17:32.801876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.127 [2024-06-11 15:17:32.801891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.811765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.127 [2024-06-11 15:17:32.811865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.127 [2024-06-11 15:17:32.811882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.127 [2024-06-11 15:17:32.811889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.127 [2024-06-11 15:17:32.811894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.127 [2024-06-11 15:17:32.811909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.821721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.127 [2024-06-11 15:17:32.821816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.127 [2024-06-11 15:17:32.821831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.127 [2024-06-11 15:17:32.821837] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.127 [2024-06-11 15:17:32.821843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.127 [2024-06-11 15:17:32.821858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.127 qpair failed and we were unable to recover it. 
00:32:14.127 [2024-06-11 15:17:32.831715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.127 [2024-06-11 15:17:32.831805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.127 [2024-06-11 15:17:32.831820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.127 [2024-06-11 15:17:32.831826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.127 [2024-06-11 15:17:32.831832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.127 [2024-06-11 15:17:32.831847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.127 [2024-06-11 15:17:32.841778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.127 [2024-06-11 15:17:32.841868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.127 [2024-06-11 15:17:32.841883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.127 [2024-06-11 15:17:32.841889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.127 [2024-06-11 15:17:32.841894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.127 [2024-06-11 15:17:32.841909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.127 qpair failed and we were unable to recover it. 00:32:14.128 [2024-06-11 15:17:32.851797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.851889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.851904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.851910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.851915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.851929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 
00:32:14.128 [2024-06-11 15:17:32.861898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.861995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.862010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.862016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.862022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.862042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 00:32:14.128 [2024-06-11 15:17:32.871883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.871982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.871997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.872004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.872009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.872028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 00:32:14.128 [2024-06-11 15:17:32.881882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.881975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.881991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.881997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.882003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.882018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 
00:32:14.128 [2024-06-11 15:17:32.891896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.891994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.892010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.892016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.892022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.892042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 00:32:14.128 [2024-06-11 15:17:32.902012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.902112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.902132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.902139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.902144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.902158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 00:32:14.128 [2024-06-11 15:17:32.912006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.912102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.912118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.912125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.912130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.912145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 
00:32:14.128 [2024-06-11 15:17:32.922012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.922110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.922125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.922131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.922138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.922152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 00:32:14.128 [2024-06-11 15:17:32.932041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.932138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.932154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.932161] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.932166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.932181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 00:32:14.128 [2024-06-11 15:17:32.942210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.942315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.942331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.942337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.942343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.942360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 
00:32:14.128 [2024-06-11 15:17:32.952237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.128 [2024-06-11 15:17:32.952330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.128 [2024-06-11 15:17:32.952346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.128 [2024-06-11 15:17:32.952352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.128 [2024-06-11 15:17:32.952358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.128 [2024-06-11 15:17:32.952373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.128 qpair failed and we were unable to recover it. 00:32:14.389 [2024-06-11 15:17:32.962210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.389 [2024-06-11 15:17:32.962305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.389 [2024-06-11 15:17:32.962320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.389 [2024-06-11 15:17:32.962326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.389 [2024-06-11 15:17:32.962332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.389 [2024-06-11 15:17:32.962346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.389 qpair failed and we were unable to recover it. 00:32:14.389 [2024-06-11 15:17:32.972127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.389 [2024-06-11 15:17:32.972220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.389 [2024-06-11 15:17:32.972236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.389 [2024-06-11 15:17:32.972244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.389 [2024-06-11 15:17:32.972250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.389 [2024-06-11 15:17:32.972266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.389 qpair failed and we were unable to recover it. 
00:32:14.389 [2024-06-11 15:17:32.982191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.389 [2024-06-11 15:17:32.982285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.389 [2024-06-11 15:17:32.982300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.389 [2024-06-11 15:17:32.982306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.389 [2024-06-11 15:17:32.982312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.389 [2024-06-11 15:17:32.982326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.389 qpair failed and we were unable to recover it. 00:32:14.389 [2024-06-11 15:17:32.992272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.389 [2024-06-11 15:17:32.992368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.389 [2024-06-11 15:17:32.992388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.389 [2024-06-11 15:17:32.992394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.389 [2024-06-11 15:17:32.992400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.389 [2024-06-11 15:17:32.992415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.389 qpair failed and we were unable to recover it. 00:32:14.389 [2024-06-11 15:17:33.002200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.389 [2024-06-11 15:17:33.002296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.389 [2024-06-11 15:17:33.002312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.389 [2024-06-11 15:17:33.002318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.389 [2024-06-11 15:17:33.002324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.389 [2024-06-11 15:17:33.002339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.389 qpair failed and we were unable to recover it. 
00:32:14.389 [2024-06-11 15:17:33.012246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.389 [2024-06-11 15:17:33.012337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.389 [2024-06-11 15:17:33.012352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.389 [2024-06-11 15:17:33.012358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.389 [2024-06-11 15:17:33.012364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.389 [2024-06-11 15:17:33.012378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.389 qpair failed and we were unable to recover it. 00:32:14.389 [2024-06-11 15:17:33.022364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.389 [2024-06-11 15:17:33.022458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.389 [2024-06-11 15:17:33.022473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.389 [2024-06-11 15:17:33.022479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.389 [2024-06-11 15:17:33.022485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.389 [2024-06-11 15:17:33.022500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.389 qpair failed and we were unable to recover it. 00:32:14.389 [2024-06-11 15:17:33.032354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.032446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.032461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.032468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.032476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.032490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 
00:32:14.390 [2024-06-11 15:17:33.042326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.042422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.042437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.042443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.042448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.042463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 00:32:14.390 [2024-06-11 15:17:33.052390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.052483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.052498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.052504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.052509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.052523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 00:32:14.390 [2024-06-11 15:17:33.062418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.062516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.062532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.062538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.062544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.062557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 
00:32:14.390 [2024-06-11 15:17:33.072446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.072540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.072555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.072561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.072567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.072581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 00:32:14.390 [2024-06-11 15:17:33.082495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.082641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.082657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.082663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.082669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.082684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 00:32:14.390 [2024-06-11 15:17:33.092464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.092568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.092583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.092588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.092594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.092608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 
00:32:14.390 [2024-06-11 15:17:33.102507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.102615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.102630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.102636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.102642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.102656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 00:32:14.390 [2024-06-11 15:17:33.112606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.112788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.112804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.112810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.112816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.112831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 00:32:14.390 [2024-06-11 15:17:33.122640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.122777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.122793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.122800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.122808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.122823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 
00:32:14.390 [2024-06-11 15:17:33.132674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.132762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.132777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.132783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.132788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.132803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 00:32:14.390 [2024-06-11 15:17:33.142591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.142684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.142699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.142705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.142711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.142726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 00:32:14.390 [2024-06-11 15:17:33.152739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.152834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.152849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.152855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.152860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.390 [2024-06-11 15:17:33.152874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.390 qpair failed and we were unable to recover it. 
00:32:14.390 [2024-06-11 15:17:33.162650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.390 [2024-06-11 15:17:33.162738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.390 [2024-06-11 15:17:33.162754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.390 [2024-06-11 15:17:33.162760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.390 [2024-06-11 15:17:33.162765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.391 [2024-06-11 15:17:33.162779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.391 qpair failed and we were unable to recover it. 00:32:14.391 [2024-06-11 15:17:33.172754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.391 [2024-06-11 15:17:33.172843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.391 [2024-06-11 15:17:33.172858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.391 [2024-06-11 15:17:33.172864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.391 [2024-06-11 15:17:33.172869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.391 [2024-06-11 15:17:33.172883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.391 qpair failed and we were unable to recover it. 00:32:14.391 [2024-06-11 15:17:33.182715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.391 [2024-06-11 15:17:33.182819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.391 [2024-06-11 15:17:33.182834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.391 [2024-06-11 15:17:33.182840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.391 [2024-06-11 15:17:33.182846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.391 [2024-06-11 15:17:33.182860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.391 qpair failed and we were unable to recover it. 
00:32:14.391 [2024-06-11 15:17:33.192778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.391 [2024-06-11 15:17:33.192874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.391 [2024-06-11 15:17:33.192890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.391 [2024-06-11 15:17:33.192896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.391 [2024-06-11 15:17:33.192901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.391 [2024-06-11 15:17:33.192915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.391 qpair failed and we were unable to recover it. 00:32:14.391 [2024-06-11 15:17:33.202851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.391 [2024-06-11 15:17:33.202942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.391 [2024-06-11 15:17:33.202958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.391 [2024-06-11 15:17:33.202964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.391 [2024-06-11 15:17:33.202970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.391 [2024-06-11 15:17:33.202984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.391 qpair failed and we were unable to recover it. 00:32:14.391 [2024-06-11 15:17:33.212896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.391 [2024-06-11 15:17:33.213012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.391 [2024-06-11 15:17:33.213031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.391 [2024-06-11 15:17:33.213042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.391 [2024-06-11 15:17:33.213047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.391 [2024-06-11 15:17:33.213063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.391 qpair failed and we were unable to recover it. 
00:32:14.391 [2024-06-11 15:17:33.222830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.391 [2024-06-11 15:17:33.222918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.391 [2024-06-11 15:17:33.222933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.391 [2024-06-11 15:17:33.222939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.391 [2024-06-11 15:17:33.222945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.391 [2024-06-11 15:17:33.222959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.391 qpair failed and we were unable to recover it. 00:32:14.651 [2024-06-11 15:17:33.232862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.651 [2024-06-11 15:17:33.232962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.651 [2024-06-11 15:17:33.232977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.651 [2024-06-11 15:17:33.232983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.651 [2024-06-11 15:17:33.232988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.651 [2024-06-11 15:17:33.233003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.651 qpair failed and we were unable to recover it. 00:32:14.651 [2024-06-11 15:17:33.242895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.651 [2024-06-11 15:17:33.242992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.651 [2024-06-11 15:17:33.243007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.651 [2024-06-11 15:17:33.243013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.651 [2024-06-11 15:17:33.243018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.651 [2024-06-11 15:17:33.243038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.651 qpair failed and we were unable to recover it. 
00:32:14.651 [2024-06-11 15:17:33.252997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.651 [2024-06-11 15:17:33.253100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.651 [2024-06-11 15:17:33.253115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.651 [2024-06-11 15:17:33.253121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.651 [2024-06-11 15:17:33.253127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.651 [2024-06-11 15:17:33.253141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.651 qpair failed and we were unable to recover it. 00:32:14.651 [2024-06-11 15:17:33.262981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.651 [2024-06-11 15:17:33.263116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.651 [2024-06-11 15:17:33.263133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.651 [2024-06-11 15:17:33.263139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.651 [2024-06-11 15:17:33.263144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.651 [2024-06-11 15:17:33.263158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.651 qpair failed and we were unable to recover it. 00:32:14.651 [2024-06-11 15:17:33.272983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.651 [2024-06-11 15:17:33.273073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.651 [2024-06-11 15:17:33.273088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.651 [2024-06-11 15:17:33.273094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.651 [2024-06-11 15:17:33.273100] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.651 [2024-06-11 15:17:33.273114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.651 qpair failed and we were unable to recover it. 
00:32:14.651 [2024-06-11 15:17:33.283099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.651 [2024-06-11 15:17:33.283192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.283207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.283213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.283218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.283232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 00:32:14.652 [2024-06-11 15:17:33.293105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.293198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.293213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.293219] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.293224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.293239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 00:32:14.652 [2024-06-11 15:17:33.303173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.303266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.303281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.303290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.303295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.303310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 
00:32:14.652 [2024-06-11 15:17:33.313142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.313234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.313249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.313255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.313261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.313275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 00:32:14.652 [2024-06-11 15:17:33.323133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.323220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.323235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.323241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.323246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.323260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 00:32:14.652 [2024-06-11 15:17:33.333260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.333348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.333364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.333370] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.333378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.333392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 
00:32:14.652 [2024-06-11 15:17:33.343256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.343346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.343361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.343368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.343373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.343387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 00:32:14.652 [2024-06-11 15:17:33.353225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.353323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.353339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.353345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.353351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.353365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 00:32:14.652 [2024-06-11 15:17:33.363242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.363337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.363352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.363359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.363364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.363378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 
00:32:14.652 [2024-06-11 15:17:33.373348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.373440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.373455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.373461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.373466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.373481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 00:32:14.652 [2024-06-11 15:17:33.383315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.383454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.383470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.383476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.383482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.383496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 00:32:14.652 [2024-06-11 15:17:33.393465] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.393552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.393570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.393576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.393581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.393595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 
00:32:14.652 [2024-06-11 15:17:33.403445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.403534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.652 [2024-06-11 15:17:33.403549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.652 [2024-06-11 15:17:33.403555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.652 [2024-06-11 15:17:33.403561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.652 [2024-06-11 15:17:33.403574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.652 qpair failed and we were unable to recover it. 00:32:14.652 [2024-06-11 15:17:33.413473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.652 [2024-06-11 15:17:33.413581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.653 [2024-06-11 15:17:33.413596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.653 [2024-06-11 15:17:33.413602] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.653 [2024-06-11 15:17:33.413608] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.653 [2024-06-11 15:17:33.413622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.653 qpair failed and we were unable to recover it. 00:32:14.653 [2024-06-11 15:17:33.423444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.653 [2024-06-11 15:17:33.423536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.653 [2024-06-11 15:17:33.423551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.653 [2024-06-11 15:17:33.423557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.653 [2024-06-11 15:17:33.423562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.653 [2024-06-11 15:17:33.423576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.653 qpair failed and we were unable to recover it. 
00:32:14.653 [2024-06-11 15:17:33.433508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.653 [2024-06-11 15:17:33.433597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.653 [2024-06-11 15:17:33.433612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.653 [2024-06-11 15:17:33.433618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.653 [2024-06-11 15:17:33.433624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.653 [2024-06-11 15:17:33.433642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.653 qpair failed and we were unable to recover it. 00:32:14.653 [2024-06-11 15:17:33.443511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.653 [2024-06-11 15:17:33.443599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.653 [2024-06-11 15:17:33.443614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.653 [2024-06-11 15:17:33.443621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.653 [2024-06-11 15:17:33.443626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.653 [2024-06-11 15:17:33.443640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.653 qpair failed and we were unable to recover it. 00:32:14.653 [2024-06-11 15:17:33.453614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.653 [2024-06-11 15:17:33.453707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.653 [2024-06-11 15:17:33.453722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.653 [2024-06-11 15:17:33.453728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.653 [2024-06-11 15:17:33.453733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.653 [2024-06-11 15:17:33.453748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.653 qpair failed and we were unable to recover it. 
00:32:14.653 [2024-06-11 15:17:33.463618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.653 [2024-06-11 15:17:33.463718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.653 [2024-06-11 15:17:33.463733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.653 [2024-06-11 15:17:33.463740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.653 [2024-06-11 15:17:33.463747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.653 [2024-06-11 15:17:33.463762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.653 qpair failed and we were unable to recover it. 00:32:14.653 [2024-06-11 15:17:33.473675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.653 [2024-06-11 15:17:33.473763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.653 [2024-06-11 15:17:33.473778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.653 [2024-06-11 15:17:33.473785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.653 [2024-06-11 15:17:33.473790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.653 [2024-06-11 15:17:33.473804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.653 qpair failed and we were unable to recover it. 00:32:14.653 [2024-06-11 15:17:33.483620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.653 [2024-06-11 15:17:33.483721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.653 [2024-06-11 15:17:33.483739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.653 [2024-06-11 15:17:33.483746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.653 [2024-06-11 15:17:33.483751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.653 [2024-06-11 15:17:33.483766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.653 qpair failed and we were unable to recover it. 
00:32:14.913 [2024-06-11 15:17:33.493727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.913 [2024-06-11 15:17:33.493818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.913 [2024-06-11 15:17:33.493833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.913 [2024-06-11 15:17:33.493839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.913 [2024-06-11 15:17:33.493845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.913 [2024-06-11 15:17:33.493858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.913 qpair failed and we were unable to recover it. 00:32:14.913 [2024-06-11 15:17:33.503791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.503884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.503900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.503906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.503911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.503925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 00:32:14.914 [2024-06-11 15:17:33.513806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.513984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.514000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.514006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.514012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.514031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 
00:32:14.914 [2024-06-11 15:17:33.523850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.523947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.523962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.523968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.523974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.523991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 00:32:14.914 [2024-06-11 15:17:33.533860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.533947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.533962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.533968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.533973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.533987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 00:32:14.914 [2024-06-11 15:17:33.543903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.544027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.544043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.544050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.544055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.544070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 
00:32:14.914 [2024-06-11 15:17:33.553875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.553975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.553990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.553997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.554003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.554017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 00:32:14.914 [2024-06-11 15:17:33.563955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.564080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.564101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.564108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.564114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.564128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 00:32:14.914 [2024-06-11 15:17:33.573998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.574101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.574117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.574123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.574128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.574143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 
00:32:14.914 [2024-06-11 15:17:33.584012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.584111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.584127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.584133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.584139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.584153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 00:32:14.914 [2024-06-11 15:17:33.594051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.594139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.594154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.594160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.594165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.594179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 00:32:14.914 [2024-06-11 15:17:33.604097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.604181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.604196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.604203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.604208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.604222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 
00:32:14.914 [2024-06-11 15:17:33.614115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.614206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.614222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.614228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.614236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.614250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 00:32:14.914 [2024-06-11 15:17:33.624167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.624290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.624307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.914 [2024-06-11 15:17:33.624313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.914 [2024-06-11 15:17:33.624319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.914 [2024-06-11 15:17:33.624334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.914 qpair failed and we were unable to recover it. 00:32:14.914 [2024-06-11 15:17:33.634143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.914 [2024-06-11 15:17:33.634248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.914 [2024-06-11 15:17:33.634264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.634270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.634275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.634290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 
00:32:14.915 [2024-06-11 15:17:33.644246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.644338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.644354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.644360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.644365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.644379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 00:32:14.915 [2024-06-11 15:17:33.654237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.654327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.654342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.654348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.654353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.654366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 00:32:14.915 [2024-06-11 15:17:33.664293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.664390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.664405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.664411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.664417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.664431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 
00:32:14.915 [2024-06-11 15:17:33.674330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.674418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.674434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.674440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.674445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.674459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 00:32:14.915 [2024-06-11 15:17:33.684330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.684421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.684436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.684442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.684448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.684462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 00:32:14.915 [2024-06-11 15:17:33.694373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.694465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.694480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.694486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.694492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.694507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 
00:32:14.915 [2024-06-11 15:17:33.704399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.704494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.704509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.704518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.704523] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.704538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 00:32:14.915 [2024-06-11 15:17:33.714405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.714493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.714510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.714517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.714522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.714536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 00:32:14.915 [2024-06-11 15:17:33.724367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.724457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.724472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.724478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.724483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.724497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 
00:32:14.915 [2024-06-11 15:17:33.734467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.734557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.734572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.734578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.734584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.734597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 00:32:14.915 [2024-06-11 15:17:33.744522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.915 [2024-06-11 15:17:33.744621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.915 [2024-06-11 15:17:33.744637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.915 [2024-06-11 15:17:33.744643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.915 [2024-06-11 15:17:33.744648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:14.915 [2024-06-11 15:17:33.744662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.915 qpair failed and we were unable to recover it. 00:32:15.176 [2024-06-11 15:17:33.754506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.176 [2024-06-11 15:17:33.754634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.176 [2024-06-11 15:17:33.754649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.176 [2024-06-11 15:17:33.754655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.176 [2024-06-11 15:17:33.754660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.176 [2024-06-11 15:17:33.754674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.176 qpair failed and we were unable to recover it. 
00:32:15.176 [2024-06-11 15:17:33.764586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.176 [2024-06-11 15:17:33.764677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.176 [2024-06-11 15:17:33.764693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.176 [2024-06-11 15:17:33.764699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.176 [2024-06-11 15:17:33.764705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.176 [2024-06-11 15:17:33.764719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.176 qpair failed and we were unable to recover it. 00:32:15.176 [2024-06-11 15:17:33.774597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.176 [2024-06-11 15:17:33.774683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.176 [2024-06-11 15:17:33.774698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.176 [2024-06-11 15:17:33.774704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.176 [2024-06-11 15:17:33.774709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.176 [2024-06-11 15:17:33.774723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.176 qpair failed and we were unable to recover it. 00:32:15.176 [2024-06-11 15:17:33.784608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.176 [2024-06-11 15:17:33.784698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.176 [2024-06-11 15:17:33.784713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.176 [2024-06-11 15:17:33.784720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.176 [2024-06-11 15:17:33.784725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.176 [2024-06-11 15:17:33.784739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.176 qpair failed and we were unable to recover it. 
00:32:15.176 [2024-06-11 15:17:33.794648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.176 [2024-06-11 15:17:33.794740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.176 [2024-06-11 15:17:33.794755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.176 [2024-06-11 15:17:33.794764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.176 [2024-06-11 15:17:33.794769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.176 [2024-06-11 15:17:33.794784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.176 qpair failed and we were unable to recover it. 00:32:15.176 [2024-06-11 15:17:33.804668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.176 [2024-06-11 15:17:33.804799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.176 [2024-06-11 15:17:33.804815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.176 [2024-06-11 15:17:33.804821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.176 [2024-06-11 15:17:33.804827] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.176 [2024-06-11 15:17:33.804842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.176 qpair failed and we were unable to recover it. 00:32:15.176 [2024-06-11 15:17:33.814701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.176 [2024-06-11 15:17:33.814789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.176 [2024-06-11 15:17:33.814805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.176 [2024-06-11 15:17:33.814811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.176 [2024-06-11 15:17:33.814816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.176 [2024-06-11 15:17:33.814830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.176 qpair failed and we were unable to recover it. 
00:32:15.176 [2024-06-11 15:17:33.824678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.176 [2024-06-11 15:17:33.824775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.176 [2024-06-11 15:17:33.824791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.176 [2024-06-11 15:17:33.824797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.176 [2024-06-11 15:17:33.824803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.176 [2024-06-11 15:17:33.824817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.176 qpair failed and we were unable to recover it. 00:32:15.176 [2024-06-11 15:17:33.834773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.176 [2024-06-11 15:17:33.834864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.176 [2024-06-11 15:17:33.834880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.176 [2024-06-11 15:17:33.834886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.176 [2024-06-11 15:17:33.834891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.834906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 00:32:15.177 [2024-06-11 15:17:33.844795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.844892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.844908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.844914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.844920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.844935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 
00:32:15.177 [2024-06-11 15:17:33.854829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.854944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.854966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.854973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.854979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.854993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 00:32:15.177 [2024-06-11 15:17:33.864844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.864933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.864948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.864954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.864960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.864974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 00:32:15.177 [2024-06-11 15:17:33.874824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.874912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.874928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.874934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.874939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.874953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 
00:32:15.177 [2024-06-11 15:17:33.884943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.885038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.885057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.885064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.885069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.885083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 00:32:15.177 [2024-06-11 15:17:33.894925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.895017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.895037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.895043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.895049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.895063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 00:32:15.177 [2024-06-11 15:17:33.905014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.905113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.905129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.905135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.905140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.905155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 
00:32:15.177 [2024-06-11 15:17:33.915011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.915105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.915121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.915128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.915133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.915147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 00:32:15.177 [2024-06-11 15:17:33.925144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.925322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.925338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.925344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.925349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.925370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 00:32:15.177 [2024-06-11 15:17:33.935087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.935180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.935195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.935201] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.935206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.935220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 
00:32:15.177 [2024-06-11 15:17:33.945143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.945249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.945264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.945270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.945275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.945289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 00:32:15.177 [2024-06-11 15:17:33.955218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.177 [2024-06-11 15:17:33.955313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.177 [2024-06-11 15:17:33.955328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.177 [2024-06-11 15:17:33.955334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.177 [2024-06-11 15:17:33.955339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.177 [2024-06-11 15:17:33.955354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.177 qpair failed and we were unable to recover it. 00:32:15.178 [2024-06-11 15:17:33.965198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.178 [2024-06-11 15:17:33.965288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.178 [2024-06-11 15:17:33.965303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.178 [2024-06-11 15:17:33.965309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.178 [2024-06-11 15:17:33.965315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.178 [2024-06-11 15:17:33.965329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.178 qpair failed and we were unable to recover it. 
00:32:15.178 [2024-06-11 15:17:33.975159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.178 [2024-06-11 15:17:33.975284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.178 [2024-06-11 15:17:33.975303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.178 [2024-06-11 15:17:33.975309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.178 [2024-06-11 15:17:33.975315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.178 [2024-06-11 15:17:33.975329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.178 qpair failed and we were unable to recover it. 00:32:15.178 [2024-06-11 15:17:33.985270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.178 [2024-06-11 15:17:33.985522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.178 [2024-06-11 15:17:33.985540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.178 [2024-06-11 15:17:33.985546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.178 [2024-06-11 15:17:33.985552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.178 [2024-06-11 15:17:33.985567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.178 qpair failed and we were unable to recover it. 00:32:15.178 [2024-06-11 15:17:33.995236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.178 [2024-06-11 15:17:33.995365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.178 [2024-06-11 15:17:33.995381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.178 [2024-06-11 15:17:33.995388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.178 [2024-06-11 15:17:33.995393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.178 [2024-06-11 15:17:33.995408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.178 qpair failed and we were unable to recover it. 
00:32:15.178 [2024-06-11 15:17:34.005337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.178 [2024-06-11 15:17:34.005474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.178 [2024-06-11 15:17:34.005490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.178 [2024-06-11 15:17:34.005496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.178 [2024-06-11 15:17:34.005502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.178 [2024-06-11 15:17:34.005517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.178 qpair failed and we were unable to recover it. 00:32:15.178 [2024-06-11 15:17:34.015345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.178 [2024-06-11 15:17:34.015439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.178 [2024-06-11 15:17:34.015454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.178 [2024-06-11 15:17:34.015461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.178 [2024-06-11 15:17:34.015466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.178 [2024-06-11 15:17:34.015483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.178 qpair failed and we were unable to recover it. 00:32:15.439 [2024-06-11 15:17:34.025401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.025540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.025556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.025563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.025568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.025582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 
00:32:15.439 [2024-06-11 15:17:34.035322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.035418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.035434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.035440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.035446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.035461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 00:32:15.439 [2024-06-11 15:17:34.045413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.045504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.045520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.045527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.045532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.045547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 00:32:15.439 [2024-06-11 15:17:34.055451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.055545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.055561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.055567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.055573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.055588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 
00:32:15.439 [2024-06-11 15:17:34.065500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.065593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.065612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.065619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.065625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.065640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 00:32:15.439 [2024-06-11 15:17:34.075543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.075632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.075648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.075654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.075660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.075675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 00:32:15.439 [2024-06-11 15:17:34.085568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.085654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.085670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.085677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.085682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.085696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 
00:32:15.439 [2024-06-11 15:17:34.095578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.095666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.095682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.095688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.095693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.095707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 00:32:15.439 [2024-06-11 15:17:34.105614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.105703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.105718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.105725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.105733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.105747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 00:32:15.439 [2024-06-11 15:17:34.115575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.115701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.115718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.115724] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.115730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.115744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 
00:32:15.439 [2024-06-11 15:17:34.125707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.125811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.125826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.125832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.125838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.125852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.439 qpair failed and we were unable to recover it. 00:32:15.439 [2024-06-11 15:17:34.135697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.439 [2024-06-11 15:17:34.135789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.439 [2024-06-11 15:17:34.135804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.439 [2024-06-11 15:17:34.135810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.439 [2024-06-11 15:17:34.135816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.439 [2024-06-11 15:17:34.135831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 00:32:15.440 [2024-06-11 15:17:34.145699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.145792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.145808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.145814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.145819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.145833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 
00:32:15.440 [2024-06-11 15:17:34.155771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.155874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.155889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.155895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.155901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.155915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 00:32:15.440 [2024-06-11 15:17:34.165777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.165866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.165881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.165887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.165893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.165907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 00:32:15.440 [2024-06-11 15:17:34.175864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.175958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.175973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.175979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.175985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.175998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 
00:32:15.440 [2024-06-11 15:17:34.185875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.185970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.185985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.185991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.185997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.186011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 00:32:15.440 [2024-06-11 15:17:34.195870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.195961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.195977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.195983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.195991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.196005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 00:32:15.440 [2024-06-11 15:17:34.205898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.205996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.206011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.206017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.206022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.206040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 
00:32:15.440 [2024-06-11 15:17:34.215940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.216031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.216047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.216053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.216059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.216072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 00:32:15.440 [2024-06-11 15:17:34.225976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.226072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.226088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.226094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.226100] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.226114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 00:32:15.440 [2024-06-11 15:17:34.235937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.236033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.236049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.236055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.236060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.236075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 
00:32:15.440 [2024-06-11 15:17:34.246047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.246138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.246153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.246159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.246165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.246179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 00:32:15.440 [2024-06-11 15:17:34.256063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.256153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.256168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.256174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.256179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.256194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 00:32:15.440 [2024-06-11 15:17:34.266124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.266258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.266274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.440 [2024-06-11 15:17:34.266281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.440 [2024-06-11 15:17:34.266286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.440 [2024-06-11 15:17:34.266301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.440 qpair failed and we were unable to recover it. 
00:32:15.440 [2024-06-11 15:17:34.276143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.440 [2024-06-11 15:17:34.276237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.440 [2024-06-11 15:17:34.276253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.441 [2024-06-11 15:17:34.276260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.441 [2024-06-11 15:17:34.276265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.441 [2024-06-11 15:17:34.276279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.441 qpair failed and we were unable to recover it. 00:32:15.701 [2024-06-11 15:17:34.286173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.286271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.286287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.286295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.286301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.286316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 00:32:15.701 [2024-06-11 15:17:34.296189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.296280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.296296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.296303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.296308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.296323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 
00:32:15.701 [2024-06-11 15:17:34.306321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.306419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.306434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.306441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.306446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.306460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 00:32:15.701 [2024-06-11 15:17:34.316244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.316336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.316351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.316357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.316363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.316377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 00:32:15.701 [2024-06-11 15:17:34.326295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.326384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.326399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.326405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.326410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.326425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 
00:32:15.701 [2024-06-11 15:17:34.336233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.336345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.336366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.336373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.336378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.336393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 00:32:15.701 [2024-06-11 15:17:34.346330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.346421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.346436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.346442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.346448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.346461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 00:32:15.701 [2024-06-11 15:17:34.356381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.356504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.356521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.356527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.356532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.356547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 
00:32:15.701 [2024-06-11 15:17:34.366429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.366522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.366537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.366543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.366549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.366563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 00:32:15.701 [2024-06-11 15:17:34.376431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.376519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.376537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.376543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.376548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.701 [2024-06-11 15:17:34.376563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.701 qpair failed and we were unable to recover it. 00:32:15.701 [2024-06-11 15:17:34.386467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.701 [2024-06-11 15:17:34.386556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.701 [2024-06-11 15:17:34.386571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.701 [2024-06-11 15:17:34.386578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.701 [2024-06-11 15:17:34.386583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.386597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 
00:32:15.702 [2024-06-11 15:17:34.396490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.396585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.396601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.396607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.396613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.396627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 00:32:15.702 [2024-06-11 15:17:34.406495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.406589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.406603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.406610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.406615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.406629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 00:32:15.702 [2024-06-11 15:17:34.416524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.416619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.416633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.416639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.416645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.416659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 
00:32:15.702 [2024-06-11 15:17:34.426592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.426728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.426744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.426750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.426756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.426770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 00:32:15.702 [2024-06-11 15:17:34.436624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.436734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.436755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.436761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.436767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.436782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 00:32:15.702 [2024-06-11 15:17:34.446641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.446730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.446746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.446752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.446758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.446772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 
00:32:15.702 [2024-06-11 15:17:34.456647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.456743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.456758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.456764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.456769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.456783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 00:32:15.702 [2024-06-11 15:17:34.466628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.466720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.466738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.466744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.466749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.466764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 00:32:15.702 [2024-06-11 15:17:34.476654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.476745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.476759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.476766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.476771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.476785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 
00:32:15.702 [2024-06-11 15:17:34.486725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.486816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.486832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.486839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.486844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.486860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 00:32:15.702 [2024-06-11 15:17:34.496714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.496802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.496817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.496824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.496829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.496843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 00:32:15.702 [2024-06-11 15:17:34.506716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.506811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.506827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.506833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.506838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.506855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 
00:32:15.702 [2024-06-11 15:17:34.516852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.702 [2024-06-11 15:17:34.516945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.702 [2024-06-11 15:17:34.516960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.702 [2024-06-11 15:17:34.516966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.702 [2024-06-11 15:17:34.516971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.702 [2024-06-11 15:17:34.516986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.702 qpair failed and we were unable to recover it. 00:32:15.702 [2024-06-11 15:17:34.526942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.703 [2024-06-11 15:17:34.527039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.703 [2024-06-11 15:17:34.527054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.703 [2024-06-11 15:17:34.527060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.703 [2024-06-11 15:17:34.527066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.703 [2024-06-11 15:17:34.527080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.703 qpair failed and we were unable to recover it. 00:32:15.703 [2024-06-11 15:17:34.536885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.703 [2024-06-11 15:17:34.536997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.703 [2024-06-11 15:17:34.537012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.703 [2024-06-11 15:17:34.537018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.703 [2024-06-11 15:17:34.537029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.703 [2024-06-11 15:17:34.537044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.703 qpair failed and we were unable to recover it. 
00:32:15.963 [2024-06-11 15:17:34.546851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.963 [2024-06-11 15:17:34.546945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.963 [2024-06-11 15:17:34.546960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.963 [2024-06-11 15:17:34.546967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.963 [2024-06-11 15:17:34.546972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.963 [2024-06-11 15:17:34.546986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.963 qpair failed and we were unable to recover it. 00:32:15.963 [2024-06-11 15:17:34.556970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.963 [2024-06-11 15:17:34.557079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.963 [2024-06-11 15:17:34.557101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.963 [2024-06-11 15:17:34.557107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.963 [2024-06-11 15:17:34.557113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.963 [2024-06-11 15:17:34.557127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.963 qpair failed and we were unable to recover it. 00:32:15.963 [2024-06-11 15:17:34.567011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.963 [2024-06-11 15:17:34.567111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.963 [2024-06-11 15:17:34.567126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.963 [2024-06-11 15:17:34.567133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.963 [2024-06-11 15:17:34.567138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.963 [2024-06-11 15:17:34.567153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.963 qpair failed and we were unable to recover it. 
00:32:15.963 [2024-06-11 15:17:34.576947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.963 [2024-06-11 15:17:34.577043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.963 [2024-06-11 15:17:34.577059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.963 [2024-06-11 15:17:34.577065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.963 [2024-06-11 15:17:34.577070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.963 [2024-06-11 15:17:34.577085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.963 qpair failed and we were unable to recover it. 00:32:15.963 [2024-06-11 15:17:34.587152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.963 [2024-06-11 15:17:34.587243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.963 [2024-06-11 15:17:34.587258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.963 [2024-06-11 15:17:34.587264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.963 [2024-06-11 15:17:34.587270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.963 [2024-06-11 15:17:34.587284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.963 qpair failed and we were unable to recover it. 00:32:15.963 [2024-06-11 15:17:34.597085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.963 [2024-06-11 15:17:34.597190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.963 [2024-06-11 15:17:34.597205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.963 [2024-06-11 15:17:34.597211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.963 [2024-06-11 15:17:34.597221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.963 [2024-06-11 15:17:34.597236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.963 qpair failed and we were unable to recover it. 
00:32:15.963 [2024-06-11 15:17:34.607038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.963 [2024-06-11 15:17:34.607126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.607141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.607148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.607153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.607167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 00:32:15.964 [2024-06-11 15:17:34.617148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.617238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.617253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.617259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.617265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.617279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 00:32:15.964 [2024-06-11 15:17:34.627148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.627242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.627257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.627263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.627268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.627283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 
00:32:15.964 [2024-06-11 15:17:34.637136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.637227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.637242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.637248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.637253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.637268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 00:32:15.964 [2024-06-11 15:17:34.647240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.647427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.647444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.647450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.647455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.647470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 00:32:15.964 [2024-06-11 15:17:34.657334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.657452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.657469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.657475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.657480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.657495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 
00:32:15.964 [2024-06-11 15:17:34.667206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.667302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.667317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.667324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.667329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.667344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 00:32:15.964 [2024-06-11 15:17:34.677314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.677405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.677420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.677426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.677431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.677446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 00:32:15.964 [2024-06-11 15:17:34.687342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.687435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.687450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.687457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.687465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.687480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 
00:32:15.964 [2024-06-11 15:17:34.697379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.697468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.697483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.697491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.697496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.697510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 00:32:15.964 [2024-06-11 15:17:34.707338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.707424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.707439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.707445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.707450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.707465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 00:32:15.964 [2024-06-11 15:17:34.717384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.717504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.717520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.717526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.717532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.717546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 
00:32:15.964 [2024-06-11 15:17:34.727493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.727584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.727599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.727606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.727611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.727625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.964 qpair failed and we were unable to recover it. 00:32:15.964 [2024-06-11 15:17:34.737493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.964 [2024-06-11 15:17:34.737613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.964 [2024-06-11 15:17:34.737629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.964 [2024-06-11 15:17:34.737635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.964 [2024-06-11 15:17:34.737640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.964 [2024-06-11 15:17:34.737655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.965 qpair failed and we were unable to recover it. 00:32:15.965 [2024-06-11 15:17:34.747473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.965 [2024-06-11 15:17:34.747570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.965 [2024-06-11 15:17:34.747585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.965 [2024-06-11 15:17:34.747591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.965 [2024-06-11 15:17:34.747597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.965 [2024-06-11 15:17:34.747611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.965 qpair failed and we were unable to recover it. 
00:32:15.965 [2024-06-11 15:17:34.757586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.965 [2024-06-11 15:17:34.757680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.965 [2024-06-11 15:17:34.757695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.965 [2024-06-11 15:17:34.757701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.965 [2024-06-11 15:17:34.757706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.965 [2024-06-11 15:17:34.757720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.965 qpair failed and we were unable to recover it. 00:32:15.965 [2024-06-11 15:17:34.767543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.965 [2024-06-11 15:17:34.767640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.965 [2024-06-11 15:17:34.767657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.965 [2024-06-11 15:17:34.767663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.965 [2024-06-11 15:17:34.767668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.965 [2024-06-11 15:17:34.767683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.965 qpair failed and we were unable to recover it. 00:32:15.965 [2024-06-11 15:17:34.777591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.965 [2024-06-11 15:17:34.777682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.965 [2024-06-11 15:17:34.777697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.965 [2024-06-11 15:17:34.777706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.965 [2024-06-11 15:17:34.777711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.965 [2024-06-11 15:17:34.777725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.965 qpair failed and we were unable to recover it. 
00:32:15.965 [2024-06-11 15:17:34.787723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.965 [2024-06-11 15:17:34.787859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.965 [2024-06-11 15:17:34.787875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.965 [2024-06-11 15:17:34.787882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.965 [2024-06-11 15:17:34.787888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.965 [2024-06-11 15:17:34.787902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.965 qpair failed and we were unable to recover it. 00:32:15.965 [2024-06-11 15:17:34.797724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.965 [2024-06-11 15:17:34.797810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.965 [2024-06-11 15:17:34.797825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.965 [2024-06-11 15:17:34.797831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.965 [2024-06-11 15:17:34.797836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:15.965 [2024-06-11 15:17:34.797851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.965 qpair failed and we were unable to recover it. 00:32:16.226 [2024-06-11 15:17:34.807733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.807823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.807838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.807844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.226 [2024-06-11 15:17:34.807849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.226 [2024-06-11 15:17:34.807863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.226 qpair failed and we were unable to recover it. 
00:32:16.226 [2024-06-11 15:17:34.817792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.817885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.817900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.817906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.226 [2024-06-11 15:17:34.817911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.226 [2024-06-11 15:17:34.817926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.226 qpair failed and we were unable to recover it. 00:32:16.226 [2024-06-11 15:17:34.827832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.827922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.827937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.827943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.226 [2024-06-11 15:17:34.827949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.226 [2024-06-11 15:17:34.827963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.226 qpair failed and we were unable to recover it. 00:32:16.226 [2024-06-11 15:17:34.837837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.837927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.837942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.837948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.226 [2024-06-11 15:17:34.837953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.226 [2024-06-11 15:17:34.837968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.226 qpair failed and we were unable to recover it. 
00:32:16.226 [2024-06-11 15:17:34.847870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.847964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.847979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.847985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.226 [2024-06-11 15:17:34.847991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.226 [2024-06-11 15:17:34.848005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.226 qpair failed and we were unable to recover it. 00:32:16.226 [2024-06-11 15:17:34.857874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.857996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.858012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.858018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.226 [2024-06-11 15:17:34.858023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.226 [2024-06-11 15:17:34.858043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.226 qpair failed and we were unable to recover it. 00:32:16.226 [2024-06-11 15:17:34.867903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.867997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.868012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.868021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.226 [2024-06-11 15:17:34.868031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.226 [2024-06-11 15:17:34.868046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.226 qpair failed and we were unable to recover it. 
00:32:16.226 [2024-06-11 15:17:34.877926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.878018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.878038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.878045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.226 [2024-06-11 15:17:34.878050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.226 [2024-06-11 15:17:34.878064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.226 qpair failed and we were unable to recover it. 00:32:16.226 [2024-06-11 15:17:34.887951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.888049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.888064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.888070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.226 [2024-06-11 15:17:34.888077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.226 [2024-06-11 15:17:34.888092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.226 qpair failed and we were unable to recover it. 00:32:16.226 [2024-06-11 15:17:34.897989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.226 [2024-06-11 15:17:34.898082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.226 [2024-06-11 15:17:34.898096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.226 [2024-06-11 15:17:34.898103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.898108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.898122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 
00:32:16.227 [2024-06-11 15:17:34.908063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.908154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.908170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.908176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.908181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.908196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 00:32:16.227 [2024-06-11 15:17:34.918040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.918132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.918148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.918154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.918160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.918174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 00:32:16.227 [2024-06-11 15:17:34.928077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.928170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.928186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.928194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.928200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.928215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 
00:32:16.227 [2024-06-11 15:17:34.938072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.938163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.938179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.938187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.938193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.938208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 00:32:16.227 [2024-06-11 15:17:34.948251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.948405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.948421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.948429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.948435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.948450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 00:32:16.227 [2024-06-11 15:17:34.958223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.958313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.958332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.958340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.958346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.958362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 
00:32:16.227 [2024-06-11 15:17:34.968231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.968321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.968338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.968346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.968352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.968367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 00:32:16.227 [2024-06-11 15:17:34.978280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.978372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.978388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.978395] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.978402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.978418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 00:32:16.227 [2024-06-11 15:17:34.988270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.988356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.988373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.988380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.988386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.988401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 
00:32:16.227 [2024-06-11 15:17:34.998273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:34.998363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:34.998379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:34.998387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:34.998393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:34.998412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 00:32:16.227 [2024-06-11 15:17:35.008324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:35.008410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:35.008427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:35.008434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:35.008442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:35.008457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 00:32:16.227 [2024-06-11 15:17:35.018339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:35.018431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:35.018447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:35.018453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:35.018460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:35.018475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 
00:32:16.227 [2024-06-11 15:17:35.028347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.227 [2024-06-11 15:17:35.028440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.227 [2024-06-11 15:17:35.028457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.227 [2024-06-11 15:17:35.028464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.227 [2024-06-11 15:17:35.028470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.227 [2024-06-11 15:17:35.028485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.227 qpair failed and we were unable to recover it. 00:32:16.227 [2024-06-11 15:17:35.038394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.228 [2024-06-11 15:17:35.038483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.228 [2024-06-11 15:17:35.038499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.228 [2024-06-11 15:17:35.038507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.228 [2024-06-11 15:17:35.038513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.228 [2024-06-11 15:17:35.038527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.228 qpair failed and we were unable to recover it. 00:32:16.228 [2024-06-11 15:17:35.048482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.228 [2024-06-11 15:17:35.048620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.228 [2024-06-11 15:17:35.048639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.228 [2024-06-11 15:17:35.048646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.228 [2024-06-11 15:17:35.048653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.228 [2024-06-11 15:17:35.048668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.228 qpair failed and we were unable to recover it. 
00:32:16.228 [2024-06-11 15:17:35.058467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.228 [2024-06-11 15:17:35.058559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.228 [2024-06-11 15:17:35.058575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.228 [2024-06-11 15:17:35.058583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.228 [2024-06-11 15:17:35.058591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.228 [2024-06-11 15:17:35.058607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.228 qpair failed and we were unable to recover it. 00:32:16.488 [2024-06-11 15:17:35.068509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.488 [2024-06-11 15:17:35.068603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.488 [2024-06-11 15:17:35.068619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.488 [2024-06-11 15:17:35.068627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.488 [2024-06-11 15:17:35.068634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.488 [2024-06-11 15:17:35.068649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.488 qpair failed and we were unable to recover it. 00:32:16.488 [2024-06-11 15:17:35.078525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.488 [2024-06-11 15:17:35.078621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.488 [2024-06-11 15:17:35.078637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.488 [2024-06-11 15:17:35.078645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.488 [2024-06-11 15:17:35.078651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.488 [2024-06-11 15:17:35.078666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.488 qpair failed and we were unable to recover it. 
00:32:16.488 [2024-06-11 15:17:35.088573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.488 [2024-06-11 15:17:35.088660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.488 [2024-06-11 15:17:35.088676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.488 [2024-06-11 15:17:35.088683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.488 [2024-06-11 15:17:35.088690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.488 [2024-06-11 15:17:35.088708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.488 qpair failed and we were unable to recover it. 00:32:16.488 [2024-06-11 15:17:35.098614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.488 [2024-06-11 15:17:35.098723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.488 [2024-06-11 15:17:35.098739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.488 [2024-06-11 15:17:35.098747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.098754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.098769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 00:32:16.489 [2024-06-11 15:17:35.108623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.108719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.108736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.108744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.108750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.108766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 
00:32:16.489 [2024-06-11 15:17:35.118624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.118749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.118766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.118774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.118781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.118796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 00:32:16.489 [2024-06-11 15:17:35.128649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.128740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.128755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.128763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.128770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.128785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 00:32:16.489 [2024-06-11 15:17:35.138633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.138733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.138749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.138756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.138763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.138779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 
00:32:16.489 [2024-06-11 15:17:35.148641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.148740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.148756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.148764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.148771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.148786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 00:32:16.489 [2024-06-11 15:17:35.158805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.158895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.158911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.158919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.158926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.158941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 00:32:16.489 [2024-06-11 15:17:35.168798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.168905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.168921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.168928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.168935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.168950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 
00:32:16.489 [2024-06-11 15:17:35.178809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.178902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.178918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.178926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.178936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.178951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 00:32:16.489 [2024-06-11 15:17:35.188839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.188930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.188947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.188954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.188961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.188976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 00:32:16.489 [2024-06-11 15:17:35.198882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.198971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.198988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.198995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.199002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.199018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 
00:32:16.489 [2024-06-11 15:17:35.208917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.209014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.209035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.209043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.209049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.209065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 00:32:16.489 [2024-06-11 15:17:35.218925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.219018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.219039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.219048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.219054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.219069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 00:32:16.489 [2024-06-11 15:17:35.229011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.489 [2024-06-11 15:17:35.229115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.489 [2024-06-11 15:17:35.229132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.489 [2024-06-11 15:17:35.229140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.489 [2024-06-11 15:17:35.229146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.489 [2024-06-11 15:17:35.229162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.489 qpair failed and we were unable to recover it. 
00:32:16.489 [2024-06-11 15:17:35.239004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.490 [2024-06-11 15:17:35.239142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.490 [2024-06-11 15:17:35.239159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.490 [2024-06-11 15:17:35.239167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.490 [2024-06-11 15:17:35.239173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.490 [2024-06-11 15:17:35.239189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.490 qpair failed and we were unable to recover it. 00:32:16.490 [2024-06-11 15:17:35.249090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.490 [2024-06-11 15:17:35.249221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.490 [2024-06-11 15:17:35.249238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.490 [2024-06-11 15:17:35.249246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.490 [2024-06-11 15:17:35.249252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.490 [2024-06-11 15:17:35.249268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.490 qpair failed and we were unable to recover it. 00:32:16.490 [2024-06-11 15:17:35.259051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.490 [2024-06-11 15:17:35.259141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.490 [2024-06-11 15:17:35.259158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.490 [2024-06-11 15:17:35.259165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.490 [2024-06-11 15:17:35.259172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.490 [2024-06-11 15:17:35.259187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.490 qpair failed and we were unable to recover it. 
00:32:16.490 [2024-06-11 15:17:35.269072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.490 [2024-06-11 15:17:35.269162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.490 [2024-06-11 15:17:35.269178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.490 [2024-06-11 15:17:35.269189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.490 [2024-06-11 15:17:35.269195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.490 [2024-06-11 15:17:35.269211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.490 qpair failed and we were unable to recover it. 00:32:16.490 [2024-06-11 15:17:35.279100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.490 [2024-06-11 15:17:35.279194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.490 [2024-06-11 15:17:35.279210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.490 [2024-06-11 15:17:35.279218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.490 [2024-06-11 15:17:35.279225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.490 [2024-06-11 15:17:35.279239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.490 qpair failed and we were unable to recover it. 00:32:16.490 [2024-06-11 15:17:35.289157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.490 [2024-06-11 15:17:35.289249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.490 [2024-06-11 15:17:35.289265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.490 [2024-06-11 15:17:35.289273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.490 [2024-06-11 15:17:35.289280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.490 [2024-06-11 15:17:35.289296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.490 qpair failed and we were unable to recover it. 
00:32:16.490 [2024-06-11 15:17:35.299164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.490 [2024-06-11 15:17:35.299259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.490 [2024-06-11 15:17:35.299275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.490 [2024-06-11 15:17:35.299283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.490 [2024-06-11 15:17:35.299289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.490 [2024-06-11 15:17:35.299304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.490 qpair failed and we were unable to recover it. 00:32:16.490 [2024-06-11 15:17:35.309233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.490 [2024-06-11 15:17:35.309342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.490 [2024-06-11 15:17:35.309359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.490 [2024-06-11 15:17:35.309367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.490 [2024-06-11 15:17:35.309373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.490 [2024-06-11 15:17:35.309388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.490 qpair failed and we were unable to recover it. 00:32:16.490 [2024-06-11 15:17:35.319232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.490 [2024-06-11 15:17:35.319358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.490 [2024-06-11 15:17:35.319373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.490 [2024-06-11 15:17:35.319380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.490 [2024-06-11 15:17:35.319387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.490 [2024-06-11 15:17:35.319401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.490 qpair failed and we were unable to recover it. 
00:32:16.751 [2024-06-11 15:17:35.329257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.751 [2024-06-11 15:17:35.329345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.751 [2024-06-11 15:17:35.329361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.751 [2024-06-11 15:17:35.329368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.751 [2024-06-11 15:17:35.329375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.751 [2024-06-11 15:17:35.329390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.751 qpair failed and we were unable to recover it. 00:32:16.751 [2024-06-11 15:17:35.339274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.751 [2024-06-11 15:17:35.339365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.751 [2024-06-11 15:17:35.339381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.751 [2024-06-11 15:17:35.339388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.751 [2024-06-11 15:17:35.339394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.751 [2024-06-11 15:17:35.339409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.751 qpair failed and we were unable to recover it. 00:32:16.751 [2024-06-11 15:17:35.349336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.751 [2024-06-11 15:17:35.349433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.751 [2024-06-11 15:17:35.349449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.751 [2024-06-11 15:17:35.349455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.751 [2024-06-11 15:17:35.349461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.751 [2024-06-11 15:17:35.349475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.751 qpair failed and we were unable to recover it. 
00:32:16.751 [2024-06-11 15:17:35.359380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.751 [2024-06-11 15:17:35.359473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.751 [2024-06-11 15:17:35.359489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.751 [2024-06-11 15:17:35.359501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.751 [2024-06-11 15:17:35.359507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.751 [2024-06-11 15:17:35.359522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.751 qpair failed and we were unable to recover it. 00:32:16.751 [2024-06-11 15:17:35.369399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.751 [2024-06-11 15:17:35.369495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.751 [2024-06-11 15:17:35.369512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.751 [2024-06-11 15:17:35.369519] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.751 [2024-06-11 15:17:35.369525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.751 [2024-06-11 15:17:35.369539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.751 qpair failed and we were unable to recover it. 00:32:16.751 [2024-06-11 15:17:35.379416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.751 [2024-06-11 15:17:35.379509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.751 [2024-06-11 15:17:35.379525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.751 [2024-06-11 15:17:35.379532] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.751 [2024-06-11 15:17:35.379537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.379551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 
00:32:16.752 [2024-06-11 15:17:35.389372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.389469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.389485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.389492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.389497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.389512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 00:32:16.752 [2024-06-11 15:17:35.399422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.399512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.399528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.399535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.399541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.399555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 00:32:16.752 [2024-06-11 15:17:35.409496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.409585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.409600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.409607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.409613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.409627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 
00:32:16.752 [2024-06-11 15:17:35.419580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.419693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.419708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.419715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.419720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.419735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 00:32:16.752 [2024-06-11 15:17:35.429589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.429698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.429713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.429720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.429726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.429740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 00:32:16.752 [2024-06-11 15:17:35.439622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.439712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.439728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.439734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.439740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.439754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 
00:32:16.752 [2024-06-11 15:17:35.449649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.449764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.449783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.449790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.449795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.449809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 00:32:16.752 [2024-06-11 15:17:35.459654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.459748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.459764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.459771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.459777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.459792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 00:32:16.752 [2024-06-11 15:17:35.469686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.469782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.469797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.469804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.469809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.469824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 
00:32:16.752 [2024-06-11 15:17:35.479741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.479837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.479853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.479860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.479865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.479879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 00:32:16.752 [2024-06-11 15:17:35.489771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.489876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.489891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.489898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.489904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.489921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 00:32:16.752 [2024-06-11 15:17:35.499826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.752 [2024-06-11 15:17:35.499935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.752 [2024-06-11 15:17:35.499951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.752 [2024-06-11 15:17:35.499959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.752 [2024-06-11 15:17:35.499965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.752 [2024-06-11 15:17:35.499980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.752 qpair failed and we were unable to recover it. 
00:32:16.752 [2024-06-11 15:17:35.509803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.753 [2024-06-11 15:17:35.509902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.753 [2024-06-11 15:17:35.509917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.753 [2024-06-11 15:17:35.509924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.753 [2024-06-11 15:17:35.509930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.753 [2024-06-11 15:17:35.509944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.753 qpair failed and we were unable to recover it. 00:32:16.753 [2024-06-11 15:17:35.519860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.753 [2024-06-11 15:17:35.519951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.753 [2024-06-11 15:17:35.519967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.753 [2024-06-11 15:17:35.519974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.753 [2024-06-11 15:17:35.519979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.753 [2024-06-11 15:17:35.519994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.753 qpair failed and we were unable to recover it. 00:32:16.753 [2024-06-11 15:17:35.529890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.753 [2024-06-11 15:17:35.529976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.753 [2024-06-11 15:17:35.529992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.753 [2024-06-11 15:17:35.529999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.753 [2024-06-11 15:17:35.530004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.753 [2024-06-11 15:17:35.530020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.753 qpair failed and we were unable to recover it. 
00:32:16.753 [2024-06-11 15:17:35.539920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.753 [2024-06-11 15:17:35.540015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.753 [2024-06-11 15:17:35.540037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.753 [2024-06-11 15:17:35.540045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.753 [2024-06-11 15:17:35.540050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.753 [2024-06-11 15:17:35.540065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.753 qpair failed and we were unable to recover it. 00:32:16.753 [2024-06-11 15:17:35.549933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.753 [2024-06-11 15:17:35.550032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.753 [2024-06-11 15:17:35.550047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.753 [2024-06-11 15:17:35.550055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.753 [2024-06-11 15:17:35.550060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.753 [2024-06-11 15:17:35.550075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.753 qpair failed and we were unable to recover it. 00:32:16.753 [2024-06-11 15:17:35.559979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.753 [2024-06-11 15:17:35.560073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.753 [2024-06-11 15:17:35.560089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.753 [2024-06-11 15:17:35.560096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.753 [2024-06-11 15:17:35.560101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.753 [2024-06-11 15:17:35.560115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.753 qpair failed and we were unable to recover it. 
00:32:16.753 [2024-06-11 15:17:35.570017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.753 [2024-06-11 15:17:35.570111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.753 [2024-06-11 15:17:35.570126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.753 [2024-06-11 15:17:35.570133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.753 [2024-06-11 15:17:35.570139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.753 [2024-06-11 15:17:35.570154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.753 qpair failed and we were unable to recover it. 00:32:16.753 [2024-06-11 15:17:35.580037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.753 [2024-06-11 15:17:35.580126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.753 [2024-06-11 15:17:35.580142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.753 [2024-06-11 15:17:35.580149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.753 [2024-06-11 15:17:35.580154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.753 [2024-06-11 15:17:35.580172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.753 qpair failed and we were unable to recover it. 00:32:16.753 [2024-06-11 15:17:35.590078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.753 [2024-06-11 15:17:35.590209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.753 [2024-06-11 15:17:35.590225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.753 [2024-06-11 15:17:35.590232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.753 [2024-06-11 15:17:35.590237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:16.753 [2024-06-11 15:17:35.590252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.753 qpair failed and we were unable to recover it. 
00:32:17.014 [2024-06-11 15:17:35.600105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.014 [2024-06-11 15:17:35.600198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.014 [2024-06-11 15:17:35.600214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.014 [2024-06-11 15:17:35.600221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.014 [2024-06-11 15:17:35.600226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.014 [2024-06-11 15:17:35.600241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.014 qpair failed and we were unable to recover it. 00:32:17.014 [2024-06-11 15:17:35.610117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.014 [2024-06-11 15:17:35.610205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.014 [2024-06-11 15:17:35.610220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.014 [2024-06-11 15:17:35.610227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.014 [2024-06-11 15:17:35.610233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.014 [2024-06-11 15:17:35.610247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.014 qpair failed and we were unable to recover it. 00:32:17.014 [2024-06-11 15:17:35.620178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.014 [2024-06-11 15:17:35.620332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.014 [2024-06-11 15:17:35.620347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.014 [2024-06-11 15:17:35.620354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.014 [2024-06-11 15:17:35.620359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.014 [2024-06-11 15:17:35.620374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.014 qpair failed and we were unable to recover it. 
00:32:17.014 [2024-06-11 15:17:35.630232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.014 [2024-06-11 15:17:35.630324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.014 [2024-06-11 15:17:35.630342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.014 [2024-06-11 15:17:35.630349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.014 [2024-06-11 15:17:35.630354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.014 [2024-06-11 15:17:35.630369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.014 qpair failed and we were unable to recover it. 00:32:17.014 [2024-06-11 15:17:35.640225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.014 [2024-06-11 15:17:35.640313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.014 [2024-06-11 15:17:35.640328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.014 [2024-06-11 15:17:35.640335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.014 [2024-06-11 15:17:35.640341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.014 [2024-06-11 15:17:35.640356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.014 qpair failed and we were unable to recover it. 00:32:17.014 [2024-06-11 15:17:35.650337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.014 [2024-06-11 15:17:35.650429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.014 [2024-06-11 15:17:35.650445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.014 [2024-06-11 15:17:35.650451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.014 [2024-06-11 15:17:35.650457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.014 [2024-06-11 15:17:35.650471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.014 qpair failed and we were unable to recover it. 
00:32:17.014 [2024-06-11 15:17:35.660329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.014 [2024-06-11 15:17:35.660422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.014 [2024-06-11 15:17:35.660437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.014 [2024-06-11 15:17:35.660444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.014 [2024-06-11 15:17:35.660450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.014 [2024-06-11 15:17:35.660464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.014 qpair failed and we were unable to recover it. 00:32:17.014 [2024-06-11 15:17:35.670319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.014 [2024-06-11 15:17:35.670408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.014 [2024-06-11 15:17:35.670423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.670431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.670439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.670454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 00:32:17.015 [2024-06-11 15:17:35.680346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.680440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.680455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.680462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.680468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.680483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 
00:32:17.015 [2024-06-11 15:17:35.690381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.690474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.690489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.690496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.690502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.690517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 00:32:17.015 [2024-06-11 15:17:35.700382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.700479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.700495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.700501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.700507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.700522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 00:32:17.015 [2024-06-11 15:17:35.710353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.710473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.710488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.710496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.710502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.710518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 
00:32:17.015 [2024-06-11 15:17:35.720440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.720537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.720552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.720559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.720565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.720579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 00:32:17.015 [2024-06-11 15:17:35.730513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.730639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.730655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.730662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.730668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.730683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 00:32:17.015 [2024-06-11 15:17:35.740519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.740616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.740631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.740637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.740643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.740657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 
00:32:17.015 [2024-06-11 15:17:35.750529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.750626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.750642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.750650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.750657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.750672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 00:32:17.015 [2024-06-11 15:17:35.760624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.760760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.760775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.760782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.760790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.760805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 00:32:17.015 [2024-06-11 15:17:35.770605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.770695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.770710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.770717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.015 [2024-06-11 15:17:35.770723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.015 [2024-06-11 15:17:35.770737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.015 qpair failed and we were unable to recover it. 
00:32:17.015 [2024-06-11 15:17:35.780598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.015 [2024-06-11 15:17:35.780702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.015 [2024-06-11 15:17:35.780718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.015 [2024-06-11 15:17:35.780725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.016 [2024-06-11 15:17:35.780731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.016 [2024-06-11 15:17:35.780745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.016 qpair failed and we were unable to recover it. 00:32:17.016 [2024-06-11 15:17:35.790713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.016 [2024-06-11 15:17:35.790856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.016 [2024-06-11 15:17:35.790871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.016 [2024-06-11 15:17:35.790878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.016 [2024-06-11 15:17:35.790884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.016 [2024-06-11 15:17:35.790898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.016 qpair failed and we were unable to recover it. 00:32:17.016 [2024-06-11 15:17:35.800623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.016 [2024-06-11 15:17:35.800715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.016 [2024-06-11 15:17:35.800731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.016 [2024-06-11 15:17:35.800738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.016 [2024-06-11 15:17:35.800743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.016 [2024-06-11 15:17:35.800758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.016 qpair failed and we were unable to recover it. 
00:32:17.016 [2024-06-11 15:17:35.810671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.016 [2024-06-11 15:17:35.810765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.016 [2024-06-11 15:17:35.810781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.016 [2024-06-11 15:17:35.810787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.016 [2024-06-11 15:17:35.810793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.016 [2024-06-11 15:17:35.810807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.016 qpair failed and we were unable to recover it. 00:32:17.016 [2024-06-11 15:17:35.820757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.016 [2024-06-11 15:17:35.820862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.016 [2024-06-11 15:17:35.820877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.016 [2024-06-11 15:17:35.820884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.016 [2024-06-11 15:17:35.820889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.016 [2024-06-11 15:17:35.820903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.016 qpair failed and we were unable to recover it. 00:32:17.016 [2024-06-11 15:17:35.830823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.016 [2024-06-11 15:17:35.830923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.016 [2024-06-11 15:17:35.830939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.016 [2024-06-11 15:17:35.830945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.016 [2024-06-11 15:17:35.830951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.016 [2024-06-11 15:17:35.830965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.016 qpair failed and we were unable to recover it. 
00:32:17.016 [2024-06-11 15:17:35.840833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.016 [2024-06-11 15:17:35.840933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.016 [2024-06-11 15:17:35.840948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.016 [2024-06-11 15:17:35.840955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.016 [2024-06-11 15:17:35.840961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.016 [2024-06-11 15:17:35.840975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.016 qpair failed and we were unable to recover it. 00:32:17.016 [2024-06-11 15:17:35.850864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.016 [2024-06-11 15:17:35.850961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.016 [2024-06-11 15:17:35.850976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.016 [2024-06-11 15:17:35.850985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.016 [2024-06-11 15:17:35.850991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.016 [2024-06-11 15:17:35.851005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.016 qpair failed and we were unable to recover it. 00:32:17.277 [2024-06-11 15:17:35.860886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.277 [2024-06-11 15:17:35.860974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.277 [2024-06-11 15:17:35.860990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.277 [2024-06-11 15:17:35.860997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.277 [2024-06-11 15:17:35.861003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.277 [2024-06-11 15:17:35.861018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.277 qpair failed and we were unable to recover it. 
00:32:17.277 [2024-06-11 15:17:35.870930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.277 [2024-06-11 15:17:35.871036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.277 [2024-06-11 15:17:35.871052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.277 [2024-06-11 15:17:35.871059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.277 [2024-06-11 15:17:35.871065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.277 [2024-06-11 15:17:35.871080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.277 qpair failed and we were unable to recover it. 00:32:17.277 [2024-06-11 15:17:35.880872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.277 [2024-06-11 15:17:35.880962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.277 [2024-06-11 15:17:35.880978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.277 [2024-06-11 15:17:35.880985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.277 [2024-06-11 15:17:35.880990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.277 [2024-06-11 15:17:35.881004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.277 qpair failed and we were unable to recover it. 00:32:17.277 [2024-06-11 15:17:35.890936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.277 [2024-06-11 15:17:35.891029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.277 [2024-06-11 15:17:35.891045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.277 [2024-06-11 15:17:35.891052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.277 [2024-06-11 15:17:35.891058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.277 [2024-06-11 15:17:35.891073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.277 qpair failed and we were unable to recover it. 
00:32:17.277 [2024-06-11 15:17:35.901175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.277 [2024-06-11 15:17:35.901270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.277 [2024-06-11 15:17:35.901285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.277 [2024-06-11 15:17:35.901292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.277 [2024-06-11 15:17:35.901297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.277 [2024-06-11 15:17:35.901312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.277 qpair failed and we were unable to recover it. 00:32:17.277 [2024-06-11 15:17:35.911044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.277 [2024-06-11 15:17:35.911142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.277 [2024-06-11 15:17:35.911157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.277 [2024-06-11 15:17:35.911165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.277 [2024-06-11 15:17:35.911171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.277 [2024-06-11 15:17:35.911185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.277 qpair failed and we were unable to recover it. 00:32:17.277 [2024-06-11 15:17:35.921100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:35.921192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:35.921207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:35.921214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:35.921219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:35.921235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 
00:32:17.278 [2024-06-11 15:17:35.931065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:35.931155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:35.931170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:35.931177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:35.931182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:35.931197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 00:32:17.278 [2024-06-11 15:17:35.941104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:35.941200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:35.941218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:35.941226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:35.941231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:35.941245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 00:32:17.278 [2024-06-11 15:17:35.951140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:35.951234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:35.951249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:35.951256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:35.951262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:35.951277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 
00:32:17.278 [2024-06-11 15:17:35.961122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:35.961215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:35.961230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:35.961237] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:35.961244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:35.961258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 00:32:17.278 [2024-06-11 15:17:35.971228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:35.971359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:35.971375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:35.971382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:35.971388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:35.971402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 00:32:17.278 [2024-06-11 15:17:35.981220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:35.981312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:35.981328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:35.981334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:35.981340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:35.981355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 
00:32:17.278 [2024-06-11 15:17:35.991291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:35.991384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:35.991399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:35.991406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:35.991413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:35.991427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 00:32:17.278 [2024-06-11 15:17:36.001283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:36.001368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:36.001384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:36.001393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:36.001400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:36.001416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 00:32:17.278 [2024-06-11 15:17:36.011319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:36.011418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:36.011436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:36.011445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:36.011453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:36.011469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 
00:32:17.278 [2024-06-11 15:17:36.021357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:36.021451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:36.021466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:36.021473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:36.021478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:36.021493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 00:32:17.278 [2024-06-11 15:17:36.031377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:36.031477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:36.031496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:36.031503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:36.031508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:36.031523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 00:32:17.278 [2024-06-11 15:17:36.041343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:36.041447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:36.041463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:36.041470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:36.041475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.278 [2024-06-11 15:17:36.041489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.278 qpair failed and we were unable to recover it. 
00:32:17.278 [2024-06-11 15:17:36.051429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.278 [2024-06-11 15:17:36.051517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.278 [2024-06-11 15:17:36.051533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.278 [2024-06-11 15:17:36.051540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.278 [2024-06-11 15:17:36.051545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.279 [2024-06-11 15:17:36.051560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.279 qpair failed and we were unable to recover it. 00:32:17.279 [2024-06-11 15:17:36.061453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.279 [2024-06-11 15:17:36.061544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.279 [2024-06-11 15:17:36.061560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.279 [2024-06-11 15:17:36.061567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.279 [2024-06-11 15:17:36.061573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.279 [2024-06-11 15:17:36.061588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.279 qpair failed and we were unable to recover it. 00:32:17.279 [2024-06-11 15:17:36.071412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.279 [2024-06-11 15:17:36.071500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.279 [2024-06-11 15:17:36.071516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.279 [2024-06-11 15:17:36.071523] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.279 [2024-06-11 15:17:36.071529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.279 [2024-06-11 15:17:36.071547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.279 qpair failed and we were unable to recover it. 
00:32:17.279 [2024-06-11 15:17:36.081503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.279 [2024-06-11 15:17:36.081598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.279 [2024-06-11 15:17:36.081614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.279 [2024-06-11 15:17:36.081621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.279 [2024-06-11 15:17:36.081627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.279 [2024-06-11 15:17:36.081642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.279 qpair failed and we were unable to recover it. 00:32:17.279 [2024-06-11 15:17:36.091461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.279 [2024-06-11 15:17:36.091581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.279 [2024-06-11 15:17:36.091596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.279 [2024-06-11 15:17:36.091603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.279 [2024-06-11 15:17:36.091609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.279 [2024-06-11 15:17:36.091623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.279 qpair failed and we were unable to recover it. 00:32:17.279 [2024-06-11 15:17:36.101572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.279 [2024-06-11 15:17:36.101666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.279 [2024-06-11 15:17:36.101681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.279 [2024-06-11 15:17:36.101688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.279 [2024-06-11 15:17:36.101694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.279 [2024-06-11 15:17:36.101709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.279 qpair failed and we were unable to recover it. 
00:32:17.279 [2024-06-11 15:17:36.111638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.279 [2024-06-11 15:17:36.111735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.279 [2024-06-11 15:17:36.111751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.279 [2024-06-11 15:17:36.111758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.279 [2024-06-11 15:17:36.111764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.279 [2024-06-11 15:17:36.111778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.279 qpair failed and we were unable to recover it. 00:32:17.540 [2024-06-11 15:17:36.121684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.540 [2024-06-11 15:17:36.121821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.540 [2024-06-11 15:17:36.121840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.540 [2024-06-11 15:17:36.121847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.540 [2024-06-11 15:17:36.121852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.540 [2024-06-11 15:17:36.121866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.540 qpair failed and we were unable to recover it. 00:32:17.540 [2024-06-11 15:17:36.131680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.540 [2024-06-11 15:17:36.131863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.540 [2024-06-11 15:17:36.131878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.540 [2024-06-11 15:17:36.131885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.540 [2024-06-11 15:17:36.131891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.540 [2024-06-11 15:17:36.131906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.540 qpair failed and we were unable to recover it. 
00:32:17.540 [2024-06-11 15:17:36.141629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.540 [2024-06-11 15:17:36.141723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.540 [2024-06-11 15:17:36.141738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.540 [2024-06-11 15:17:36.141745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.540 [2024-06-11 15:17:36.141751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.540 [2024-06-11 15:17:36.141765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.540 qpair failed and we were unable to recover it. 00:32:17.540 [2024-06-11 15:17:36.151661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.540 [2024-06-11 15:17:36.151751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.540 [2024-06-11 15:17:36.151767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.540 [2024-06-11 15:17:36.151775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.540 [2024-06-11 15:17:36.151780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.540 [2024-06-11 15:17:36.151794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.540 qpair failed and we were unable to recover it. 00:32:17.540 [2024-06-11 15:17:36.161708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.540 [2024-06-11 15:17:36.161800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.540 [2024-06-11 15:17:36.161816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.540 [2024-06-11 15:17:36.161823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.540 [2024-06-11 15:17:36.161831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.540 [2024-06-11 15:17:36.161845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.540 qpair failed and we were unable to recover it. 
00:32:17.540 [2024-06-11 15:17:36.171718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.540 [2024-06-11 15:17:36.171812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.540 [2024-06-11 15:17:36.171828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.540 [2024-06-11 15:17:36.171835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.540 [2024-06-11 15:17:36.171841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.540 [2024-06-11 15:17:36.171855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.540 qpair failed and we were unable to recover it. 00:32:17.540 [2024-06-11 15:17:36.181830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.540 [2024-06-11 15:17:36.181921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.540 [2024-06-11 15:17:36.181937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.540 [2024-06-11 15:17:36.181944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.540 [2024-06-11 15:17:36.181950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.540 [2024-06-11 15:17:36.181965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.540 qpair failed and we were unable to recover it. 00:32:17.540 [2024-06-11 15:17:36.191871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.540 [2024-06-11 15:17:36.191960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.540 [2024-06-11 15:17:36.191976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.540 [2024-06-11 15:17:36.191983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.540 [2024-06-11 15:17:36.191988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.540 [2024-06-11 15:17:36.192003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.540 qpair failed and we were unable to recover it. 
00:32:17.540 [2024-06-11 15:17:36.201847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.201943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.201958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.201965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.201970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.201985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 00:32:17.541 [2024-06-11 15:17:36.211836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.211929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.211944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.211951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.211957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.211971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 00:32:17.541 [2024-06-11 15:17:36.221886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.221978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.221993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.222000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.222006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.222021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 
00:32:17.541 [2024-06-11 15:17:36.231920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.232014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.232035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.232043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.232049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.232064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 00:32:17.541 [2024-06-11 15:17:36.241919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.242014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.242035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.242042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.242048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.242063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 00:32:17.541 [2024-06-11 15:17:36.251966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.252125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.252141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.252148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.252156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.252171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 
00:32:17.541 [2024-06-11 15:17:36.261998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.262096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.262113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.262120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.262125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.262140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 00:32:17.541 [2024-06-11 15:17:36.272117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.272211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.272226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.272233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.272239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.272254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 00:32:17.541 [2024-06-11 15:17:36.282170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.282260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.282276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.282283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.282288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.282304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 
00:32:17.541 [2024-06-11 15:17:36.292191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.292316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.292332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.292339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.292344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.292359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 00:32:17.541 [2024-06-11 15:17:36.302174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.302268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.302284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.302291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.302296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.302311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 00:32:17.541 [2024-06-11 15:17:36.312128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.312226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.312241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.312248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.312254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.312268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 
00:32:17.541 [2024-06-11 15:17:36.322159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.322251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.322266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.322273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.541 [2024-06-11 15:17:36.322279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.541 [2024-06-11 15:17:36.322293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.541 qpair failed and we were unable to recover it. 00:32:17.541 [2024-06-11 15:17:36.332192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.541 [2024-06-11 15:17:36.332288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.541 [2024-06-11 15:17:36.332303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.541 [2024-06-11 15:17:36.332310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.542 [2024-06-11 15:17:36.332315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.542 [2024-06-11 15:17:36.332330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.542 qpair failed and we were unable to recover it. 00:32:17.542 [2024-06-11 15:17:36.342322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.542 [2024-06-11 15:17:36.342418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.542 [2024-06-11 15:17:36.342434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.542 [2024-06-11 15:17:36.342444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.542 [2024-06-11 15:17:36.342450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.542 [2024-06-11 15:17:36.342465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.542 qpair failed and we were unable to recover it. 
00:32:17.542 [2024-06-11 15:17:36.352255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.542 [2024-06-11 15:17:36.352354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.542 [2024-06-11 15:17:36.352369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.542 [2024-06-11 15:17:36.352376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.542 [2024-06-11 15:17:36.352382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.542 [2024-06-11 15:17:36.352397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.542 qpair failed and we were unable to recover it. 00:32:17.542 [2024-06-11 15:17:36.362287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.542 [2024-06-11 15:17:36.362377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.542 [2024-06-11 15:17:36.362393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.542 [2024-06-11 15:17:36.362400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.542 [2024-06-11 15:17:36.362405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.542 [2024-06-11 15:17:36.362420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.542 qpair failed and we were unable to recover it. 00:32:17.542 [2024-06-11 15:17:36.372389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.542 [2024-06-11 15:17:36.372478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.542 [2024-06-11 15:17:36.372493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.542 [2024-06-11 15:17:36.372500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.542 [2024-06-11 15:17:36.372505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.542 [2024-06-11 15:17:36.372519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.542 qpair failed and we were unable to recover it. 
00:32:17.802 [2024-06-11 15:17:36.382336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.802 [2024-06-11 15:17:36.382430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.802 [2024-06-11 15:17:36.382445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.802 [2024-06-11 15:17:36.382452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.802 [2024-06-11 15:17:36.382458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.802 [2024-06-11 15:17:36.382473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.802 qpair failed and we were unable to recover it. 00:32:17.802 [2024-06-11 15:17:36.392363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.802 [2024-06-11 15:17:36.392468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.802 [2024-06-11 15:17:36.392483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.802 [2024-06-11 15:17:36.392491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.802 [2024-06-11 15:17:36.392497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.802 [2024-06-11 15:17:36.392511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.802 qpair failed and we were unable to recover it. 00:32:17.802 [2024-06-11 15:17:36.402456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.802 [2024-06-11 15:17:36.402551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.802 [2024-06-11 15:17:36.402566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.802 [2024-06-11 15:17:36.402573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.802 [2024-06-11 15:17:36.402578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.802 [2024-06-11 15:17:36.402593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.802 qpair failed and we were unable to recover it. 
00:32:17.802 [2024-06-11 15:17:36.412474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.802 [2024-06-11 15:17:36.412566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.802 [2024-06-11 15:17:36.412582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.802 [2024-06-11 15:17:36.412589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.802 [2024-06-11 15:17:36.412595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.802 [2024-06-11 15:17:36.412609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.802 qpair failed and we were unable to recover it. 00:32:17.802 [2024-06-11 15:17:36.422555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.802 [2024-06-11 15:17:36.422671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.802 [2024-06-11 15:17:36.422686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.802 [2024-06-11 15:17:36.422692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.802 [2024-06-11 15:17:36.422698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.422713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 00:32:17.803 [2024-06-11 15:17:36.432499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.432591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.432607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.432617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.432622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.432636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 
00:32:17.803 [2024-06-11 15:17:36.442575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.442669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.442685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.442692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.442697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.442712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 00:32:17.803 [2024-06-11 15:17:36.452618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.452711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.452727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.452734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.452739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.452754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 00:32:17.803 [2024-06-11 15:17:36.462597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.462689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.462704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.462711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.462717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.462732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 
00:32:17.803 [2024-06-11 15:17:36.472613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.472707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.472722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.472730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.472736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.472750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 00:32:17.803 [2024-06-11 15:17:36.482726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.482909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.482924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.482932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.482938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.482952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 00:32:17.803 [2024-06-11 15:17:36.492757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.492849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.492865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.492872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.492878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.492892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 
00:32:17.803 [2024-06-11 15:17:36.502780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.502871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.502887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.502894] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.502900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.502914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 00:32:17.803 [2024-06-11 15:17:36.512899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.512994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.513012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.513019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.513031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.513047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 00:32:17.803 [2024-06-11 15:17:36.522848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.522943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.522961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.522968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.522973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.522987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 
00:32:17.803 [2024-06-11 15:17:36.532865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.532962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.532977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.532984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.532989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.533003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 00:32:17.803 [2024-06-11 15:17:36.542864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.803 [2024-06-11 15:17:36.542958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.803 [2024-06-11 15:17:36.542973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.803 [2024-06-11 15:17:36.542980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.803 [2024-06-11 15:17:36.542986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.803 [2024-06-11 15:17:36.543000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.803 qpair failed and we were unable to recover it. 00:32:17.803 [2024-06-11 15:17:36.552950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.804 [2024-06-11 15:17:36.553048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.804 [2024-06-11 15:17:36.553064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.804 [2024-06-11 15:17:36.553071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.804 [2024-06-11 15:17:36.553077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.804 [2024-06-11 15:17:36.553091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.804 qpair failed and we were unable to recover it. 
00:32:17.804 [2024-06-11 15:17:36.562990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.804 [2024-06-11 15:17:36.563090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.804 [2024-06-11 15:17:36.563106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.804 [2024-06-11 15:17:36.563113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.804 [2024-06-11 15:17:36.563118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.804 [2024-06-11 15:17:36.563136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.804 qpair failed and we were unable to recover it. 00:32:17.804 [2024-06-11 15:17:36.573000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.804 [2024-06-11 15:17:36.573098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.804 [2024-06-11 15:17:36.573115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.804 [2024-06-11 15:17:36.573121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.804 [2024-06-11 15:17:36.573127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.804 [2024-06-11 15:17:36.573142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.804 qpair failed and we were unable to recover it. 00:32:17.804 [2024-06-11 15:17:36.582958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.804 [2024-06-11 15:17:36.583058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.804 [2024-06-11 15:17:36.583073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.804 [2024-06-11 15:17:36.583080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.804 [2024-06-11 15:17:36.583085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.804 [2024-06-11 15:17:36.583100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.804 qpair failed and we were unable to recover it. 
00:32:17.804 [2024-06-11 15:17:36.593065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.804 [2024-06-11 15:17:36.593166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.804 [2024-06-11 15:17:36.593182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.804 [2024-06-11 15:17:36.593189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.804 [2024-06-11 15:17:36.593195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.804 [2024-06-11 15:17:36.593209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.804 qpair failed and we were unable to recover it. 00:32:17.804 [2024-06-11 15:17:36.603132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.804 [2024-06-11 15:17:36.603228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.804 [2024-06-11 15:17:36.603242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.804 [2024-06-11 15:17:36.603249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.804 [2024-06-11 15:17:36.603255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.804 [2024-06-11 15:17:36.603269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.804 qpair failed and we were unable to recover it. 00:32:17.804 [2024-06-11 15:17:36.613162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.804 [2024-06-11 15:17:36.613294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.804 [2024-06-11 15:17:36.613312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.804 [2024-06-11 15:17:36.613319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.804 [2024-06-11 15:17:36.613325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.804 [2024-06-11 15:17:36.613340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.804 qpair failed and we were unable to recover it. 
00:32:17.804 [2024-06-11 15:17:36.623132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.804 [2024-06-11 15:17:36.623225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.804 [2024-06-11 15:17:36.623240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.804 [2024-06-11 15:17:36.623247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.804 [2024-06-11 15:17:36.623253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.804 [2024-06-11 15:17:36.623268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.804 qpair failed and we were unable to recover it. 00:32:17.804 [2024-06-11 15:17:36.633167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.804 [2024-06-11 15:17:36.633288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.804 [2024-06-11 15:17:36.633303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.804 [2024-06-11 15:17:36.633310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.804 [2024-06-11 15:17:36.633316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:17.804 [2024-06-11 15:17:36.633331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.804 qpair failed and we were unable to recover it. 00:32:18.065 [2024-06-11 15:17:36.643191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.643282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.643297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.643304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.643310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.643324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 
00:32:18.065 [2024-06-11 15:17:36.653265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.653357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.653373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.653380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.653388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.653402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 00:32:18.065 [2024-06-11 15:17:36.663260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.663353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.663368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.663375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.663380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.663395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 00:32:18.065 [2024-06-11 15:17:36.673309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.673400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.673416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.673422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.673428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.673442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 
00:32:18.065 [2024-06-11 15:17:36.683373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.683508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.683523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.683530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.683535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.683550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 00:32:18.065 [2024-06-11 15:17:36.693390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.693572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.693597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.693603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.693609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.693625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 00:32:18.065 [2024-06-11 15:17:36.703387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.703480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.703495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.703502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.703508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.703522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 
00:32:18.065 [2024-06-11 15:17:36.713401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.713492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.713507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.713514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.713520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.713534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 00:32:18.065 [2024-06-11 15:17:36.723460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.723554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.723571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.723578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.723584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.723598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 00:32:18.065 [2024-06-11 15:17:36.733501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.733589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.733605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.733612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.733617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.733631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 
00:32:18.065 [2024-06-11 15:17:36.743509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.743598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.065 [2024-06-11 15:17:36.743613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.065 [2024-06-11 15:17:36.743620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.065 [2024-06-11 15:17:36.743628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.065 [2024-06-11 15:17:36.743643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.065 qpair failed and we were unable to recover it. 00:32:18.065 [2024-06-11 15:17:36.753466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.065 [2024-06-11 15:17:36.753561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.753577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.753584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.753590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.753603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 00:32:18.066 [2024-06-11 15:17:36.763527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.763618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.763633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.763640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.763645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.763660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 
00:32:18.066 [2024-06-11 15:17:36.773629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.773740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.773757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.773765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.773771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.773786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 00:32:18.066 [2024-06-11 15:17:36.783639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.783730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.783747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.783754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.783760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.783775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 00:32:18.066 [2024-06-11 15:17:36.793658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.793799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.793815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.793822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.793828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.793842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 
00:32:18.066 [2024-06-11 15:17:36.803702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.803790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.803805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.803813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.803818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.803832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 00:32:18.066 [2024-06-11 15:17:36.813753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.813854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.813870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.813877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.813882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.813897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 00:32:18.066 [2024-06-11 15:17:36.823785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.823876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.823891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.823898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.823904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.823918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 
00:32:18.066 [2024-06-11 15:17:36.833824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.833916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.833931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.834125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.834131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.834148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 00:32:18.066 [2024-06-11 15:17:36.843845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.843931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.843946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.843953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.843959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.843973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 00:32:18.066 [2024-06-11 15:17:36.853883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.853971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.853986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.853993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.853999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.854014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 
00:32:18.066 [2024-06-11 15:17:36.863899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.864002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.864018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.864030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.066 [2024-06-11 15:17:36.864036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.066 [2024-06-11 15:17:36.864051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.066 qpair failed and we were unable to recover it. 00:32:18.066 [2024-06-11 15:17:36.873928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.066 [2024-06-11 15:17:36.874027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.066 [2024-06-11 15:17:36.874043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.066 [2024-06-11 15:17:36.874050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.067 [2024-06-11 15:17:36.874056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.067 [2024-06-11 15:17:36.874071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.067 qpair failed and we were unable to recover it. 00:32:18.067 [2024-06-11 15:17:36.883962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.067 [2024-06-11 15:17:36.884058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.067 [2024-06-11 15:17:36.884073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.067 [2024-06-11 15:17:36.884080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.067 [2024-06-11 15:17:36.884086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.067 [2024-06-11 15:17:36.884101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.067 qpair failed and we were unable to recover it. 
00:32:18.067 [2024-06-11 15:17:36.894012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.067 [2024-06-11 15:17:36.894109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.067 [2024-06-11 15:17:36.894125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.067 [2024-06-11 15:17:36.894132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.067 [2024-06-11 15:17:36.894137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.067 [2024-06-11 15:17:36.894152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.067 qpair failed and we were unable to recover it. 00:32:18.067 [2024-06-11 15:17:36.904012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.067 [2024-06-11 15:17:36.904113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.067 [2024-06-11 15:17:36.904129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.067 [2024-06-11 15:17:36.904136] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.067 [2024-06-11 15:17:36.904141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.067 [2024-06-11 15:17:36.904156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.067 qpair failed and we were unable to recover it. 00:32:18.327 [2024-06-11 15:17:36.913975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.327 [2024-06-11 15:17:36.914074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.327 [2024-06-11 15:17:36.914089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.327 [2024-06-11 15:17:36.914096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.327 [2024-06-11 15:17:36.914101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.327 [2024-06-11 15:17:36.914116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.327 qpair failed and we were unable to recover it. 
00:32:18.327 [2024-06-11 15:17:36.924117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.327 [2024-06-11 15:17:36.924207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.327 [2024-06-11 15:17:36.924222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.327 [2024-06-11 15:17:36.924232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.327 [2024-06-11 15:17:36.924238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.327 [2024-06-11 15:17:36.924253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.327 qpair failed and we were unable to recover it. 00:32:18.327 [2024-06-11 15:17:36.934100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.327 [2024-06-11 15:17:36.934191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.327 [2024-06-11 15:17:36.934207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.327 [2024-06-11 15:17:36.934213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.327 [2024-06-11 15:17:36.934219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.327 [2024-06-11 15:17:36.934233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.327 qpair failed and we were unable to recover it. 00:32:18.327 [2024-06-11 15:17:36.944116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.327 [2024-06-11 15:17:36.944205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.327 [2024-06-11 15:17:36.944220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.327 [2024-06-11 15:17:36.944227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.327 [2024-06-11 15:17:36.944233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.327 [2024-06-11 15:17:36.944247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.327 qpair failed and we were unable to recover it. 
00:32:18.327 [2024-06-11 15:17:36.954144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.327 [2024-06-11 15:17:36.954259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.327 [2024-06-11 15:17:36.954274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.327 [2024-06-11 15:17:36.954281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.327 [2024-06-11 15:17:36.954287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.327 [2024-06-11 15:17:36.954302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 00:32:18.328 [2024-06-11 15:17:36.964126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:36.964219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:36.964234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:36.964241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:36.964246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:36.964261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 00:32:18.328 [2024-06-11 15:17:36.974238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:36.974365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:36.974380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:36.974387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:36.974392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:36.974407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 
00:32:18.328 [2024-06-11 15:17:36.984262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:36.984375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:36.984390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:36.984397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:36.984403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:36.984417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 00:32:18.328 [2024-06-11 15:17:36.994212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:36.994304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:36.994319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:36.994326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:36.994332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:36.994346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 00:32:18.328 [2024-06-11 15:17:37.004330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:37.004425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:37.004440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:37.004447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:37.004452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:37.004466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 
00:32:18.328 [2024-06-11 15:17:37.014356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:37.014448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:37.014466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:37.014473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:37.014479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:37.014493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 00:32:18.328 [2024-06-11 15:17:37.024360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:37.024454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:37.024469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:37.024476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:37.024482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:37.024496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 00:32:18.328 [2024-06-11 15:17:37.034342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:37.034466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:37.034481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:37.034488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:37.034494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:37.034508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 
00:32:18.328 [2024-06-11 15:17:37.044410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:37.044496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:37.044511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:37.044518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:37.044523] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:37.044537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 00:32:18.328 [2024-06-11 15:17:37.054444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:37.054550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:37.054565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:37.054573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:37.054579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:37.054596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 00:32:18.328 [2024-06-11 15:17:37.064481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:37.064578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:37.064593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:37.064600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:37.064606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.328 [2024-06-11 15:17:37.064620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.328 qpair failed and we were unable to recover it. 
00:32:18.328 [2024-06-11 15:17:37.074507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.328 [2024-06-11 15:17:37.074603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.328 [2024-06-11 15:17:37.074619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.328 [2024-06-11 15:17:37.074625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.328 [2024-06-11 15:17:37.074631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.074645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 00:32:18.329 [2024-06-11 15:17:37.084563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.329 [2024-06-11 15:17:37.084654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.329 [2024-06-11 15:17:37.084669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.329 [2024-06-11 15:17:37.084676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.329 [2024-06-11 15:17:37.084682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.084696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 00:32:18.329 [2024-06-11 15:17:37.094602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.329 [2024-06-11 15:17:37.094696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.329 [2024-06-11 15:17:37.094711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.329 [2024-06-11 15:17:37.094718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.329 [2024-06-11 15:17:37.094724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.094738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 
00:32:18.329 [2024-06-11 15:17:37.104620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.329 [2024-06-11 15:17:37.104711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.329 [2024-06-11 15:17:37.104730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.329 [2024-06-11 15:17:37.104736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.329 [2024-06-11 15:17:37.104742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.104756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 00:32:18.329 [2024-06-11 15:17:37.114665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.329 [2024-06-11 15:17:37.114757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.329 [2024-06-11 15:17:37.114774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.329 [2024-06-11 15:17:37.114781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.329 [2024-06-11 15:17:37.114787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.114801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 00:32:18.329 [2024-06-11 15:17:37.124687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.329 [2024-06-11 15:17:37.124779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.329 [2024-06-11 15:17:37.124795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.329 [2024-06-11 15:17:37.124802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.329 [2024-06-11 15:17:37.124808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.124822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 
00:32:18.329 [2024-06-11 15:17:37.134630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.329 [2024-06-11 15:17:37.134718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.329 [2024-06-11 15:17:37.134733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.329 [2024-06-11 15:17:37.134740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.329 [2024-06-11 15:17:37.134746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.134759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 00:32:18.329 [2024-06-11 15:17:37.144780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.329 [2024-06-11 15:17:37.144870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.329 [2024-06-11 15:17:37.144885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.329 [2024-06-11 15:17:37.144892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.329 [2024-06-11 15:17:37.144897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.144914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 00:32:18.329 [2024-06-11 15:17:37.154754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.329 [2024-06-11 15:17:37.154850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.329 [2024-06-11 15:17:37.154866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.329 [2024-06-11 15:17:37.154873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.329 [2024-06-11 15:17:37.154878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.154892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 
00:32:18.329 [2024-06-11 15:17:37.164780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.329 [2024-06-11 15:17:37.164873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.329 [2024-06-11 15:17:37.164889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.329 [2024-06-11 15:17:37.164895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.329 [2024-06-11 15:17:37.164901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.329 [2024-06-11 15:17:37.164915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.329 qpair failed and we were unable to recover it. 00:32:18.590 [2024-06-11 15:17:37.174815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.174903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.590 [2024-06-11 15:17:37.174919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.590 [2024-06-11 15:17:37.174926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.590 [2024-06-11 15:17:37.174931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.590 [2024-06-11 15:17:37.174946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.590 qpair failed and we were unable to recover it. 00:32:18.590 [2024-06-11 15:17:37.184858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.184959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.590 [2024-06-11 15:17:37.184974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.590 [2024-06-11 15:17:37.184981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.590 [2024-06-11 15:17:37.184986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.590 [2024-06-11 15:17:37.185000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.590 qpair failed and we were unable to recover it. 
00:32:18.590 [2024-06-11 15:17:37.194870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.194963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.590 [2024-06-11 15:17:37.194981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.590 [2024-06-11 15:17:37.194988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.590 [2024-06-11 15:17:37.194993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.590 [2024-06-11 15:17:37.195008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.590 qpair failed and we were unable to recover it. 00:32:18.590 [2024-06-11 15:17:37.204930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.205017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.590 [2024-06-11 15:17:37.205039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.590 [2024-06-11 15:17:37.205046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.590 [2024-06-11 15:17:37.205052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.590 [2024-06-11 15:17:37.205067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.590 qpair failed and we were unable to recover it. 00:32:18.590 [2024-06-11 15:17:37.214987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.215133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.590 [2024-06-11 15:17:37.215148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.590 [2024-06-11 15:17:37.215155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.590 [2024-06-11 15:17:37.215161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.590 [2024-06-11 15:17:37.215175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.590 qpair failed and we were unable to recover it. 
00:32:18.590 [2024-06-11 15:17:37.224975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.225104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.590 [2024-06-11 15:17:37.225120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.590 [2024-06-11 15:17:37.225126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.590 [2024-06-11 15:17:37.225132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.590 [2024-06-11 15:17:37.225147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.590 qpair failed and we were unable to recover it. 00:32:18.590 [2024-06-11 15:17:37.235021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.235135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.590 [2024-06-11 15:17:37.235150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.590 [2024-06-11 15:17:37.235157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.590 [2024-06-11 15:17:37.235166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.590 [2024-06-11 15:17:37.235180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.590 qpair failed and we were unable to recover it. 00:32:18.590 [2024-06-11 15:17:37.245061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.245148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.590 [2024-06-11 15:17:37.245163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.590 [2024-06-11 15:17:37.245171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.590 [2024-06-11 15:17:37.245176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.590 [2024-06-11 15:17:37.245191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.590 qpair failed and we were unable to recover it. 
00:32:18.590 [2024-06-11 15:17:37.255085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.255181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.590 [2024-06-11 15:17:37.255196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.590 [2024-06-11 15:17:37.255204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.590 [2024-06-11 15:17:37.255209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.590 [2024-06-11 15:17:37.255224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.590 qpair failed and we were unable to recover it. 00:32:18.590 [2024-06-11 15:17:37.265110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.590 [2024-06-11 15:17:37.265199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.265214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.265221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.265227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.265241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 00:32:18.591 [2024-06-11 15:17:37.275159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.275261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.275277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.275284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.275290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.275305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 
00:32:18.591 [2024-06-11 15:17:37.285183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.285271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.285286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.285293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.285299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.285313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 00:32:18.591 [2024-06-11 15:17:37.295124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.295212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.295227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.295234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.295240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.295255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 00:32:18.591 [2024-06-11 15:17:37.305256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.305365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.305381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.305388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.305394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.305409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 
00:32:18.591 [2024-06-11 15:17:37.315262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.315450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.315467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.315474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.315481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.315496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 00:32:18.591 [2024-06-11 15:17:37.325344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.325435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.325451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.325457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.325466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.325480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 00:32:18.591 [2024-06-11 15:17:37.335353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.335446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.335462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.335469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.335474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.335489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 
00:32:18.591 [2024-06-11 15:17:37.345348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.345440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.345455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.345462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.345468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.345482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 00:32:18.591 [2024-06-11 15:17:37.355402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.355498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.355514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.355520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.355526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.355541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 00:32:18.591 [2024-06-11 15:17:37.365451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.365547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.365563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.365570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.365576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.365591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 
00:32:18.591 [2024-06-11 15:17:37.375468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.375562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.375578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.375585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.375591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.375605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 00:32:18.591 [2024-06-11 15:17:37.385482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.385574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.385590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.385597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.385603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.591 [2024-06-11 15:17:37.385617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.591 qpair failed and we were unable to recover it. 00:32:18.591 [2024-06-11 15:17:37.395533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.591 [2024-06-11 15:17:37.395620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.591 [2024-06-11 15:17:37.395635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.591 [2024-06-11 15:17:37.395642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.591 [2024-06-11 15:17:37.395648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.592 [2024-06-11 15:17:37.395662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.592 qpair failed and we were unable to recover it. 
00:32:18.592 [2024-06-11 15:17:37.405549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.592 [2024-06-11 15:17:37.405638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.592 [2024-06-11 15:17:37.405653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.592 [2024-06-11 15:17:37.405661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.592 [2024-06-11 15:17:37.405666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.592 [2024-06-11 15:17:37.405680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.592 qpair failed and we were unable to recover it. 00:32:18.592 [2024-06-11 15:17:37.415585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.592 [2024-06-11 15:17:37.415680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.592 [2024-06-11 15:17:37.415695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.592 [2024-06-11 15:17:37.415705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.592 [2024-06-11 15:17:37.415711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.592 [2024-06-11 15:17:37.415725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.592 qpair failed and we were unable to recover it. 00:32:18.592 [2024-06-11 15:17:37.425608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.592 [2024-06-11 15:17:37.425700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.592 [2024-06-11 15:17:37.425715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.592 [2024-06-11 15:17:37.425722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.592 [2024-06-11 15:17:37.425727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.592 [2024-06-11 15:17:37.425741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.592 qpair failed and we were unable to recover it. 
00:32:18.852 [2024-06-11 15:17:37.435672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.852 [2024-06-11 15:17:37.435764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.852 [2024-06-11 15:17:37.435779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.852 [2024-06-11 15:17:37.435786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.852 [2024-06-11 15:17:37.435792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.852 [2024-06-11 15:17:37.435806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.852 qpair failed and we were unable to recover it. 00:32:18.852 [2024-06-11 15:17:37.445597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.852 [2024-06-11 15:17:37.445692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.852 [2024-06-11 15:17:37.445708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.852 [2024-06-11 15:17:37.445714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.852 [2024-06-11 15:17:37.445720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.852 [2024-06-11 15:17:37.445734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.852 qpair failed and we were unable to recover it. 00:32:18.852 [2024-06-11 15:17:37.455692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.852 [2024-06-11 15:17:37.455782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.852 [2024-06-11 15:17:37.455798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.852 [2024-06-11 15:17:37.455805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.852 [2024-06-11 15:17:37.455810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.852 [2024-06-11 15:17:37.455825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.852 qpair failed and we were unable to recover it. 
00:32:18.852 [2024-06-11 15:17:37.465661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.852 [2024-06-11 15:17:37.465757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.852 [2024-06-11 15:17:37.465772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.852 [2024-06-11 15:17:37.465779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.852 [2024-06-11 15:17:37.465785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.852 [2024-06-11 15:17:37.465799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.852 qpair failed and we were unable to recover it. 00:32:18.852 [2024-06-11 15:17:37.475683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.852 [2024-06-11 15:17:37.475780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.852 [2024-06-11 15:17:37.475796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.852 [2024-06-11 15:17:37.475803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.852 [2024-06-11 15:17:37.475809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.475823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 00:32:18.853 [2024-06-11 15:17:37.485783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.485870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.485886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.485893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.485899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.485913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 
00:32:18.853 [2024-06-11 15:17:37.495797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.495891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.495906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.495913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.495919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.495933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 00:32:18.853 [2024-06-11 15:17:37.505837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.505958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.505976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.505983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.505988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.506003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 00:32:18.853 [2024-06-11 15:17:37.515932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.516072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.516088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.516095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.516102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.516117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 
00:32:18.853 [2024-06-11 15:17:37.525831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.525925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.525941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.525948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.525954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.525968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 00:32:18.853 [2024-06-11 15:17:37.535944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.536084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.536101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.536108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.536114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.536129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 00:32:18.853 [2024-06-11 15:17:37.545962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.546055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.546071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.546078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.546084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.546104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 
00:32:18.853 [2024-06-11 15:17:37.556093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.556248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.556263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.556270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.556276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.556291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 00:32:18.853 [2024-06-11 15:17:37.566031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.566131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.566148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.566155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.566161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.566176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 00:32:18.853 [2024-06-11 15:17:37.576109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.576203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.576219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.576226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.576231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.576245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 
00:32:18.853 [2024-06-11 15:17:37.586130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.586238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.586254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.586261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.586266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.586280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 00:32:18.853 [2024-06-11 15:17:37.596081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.596181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.596200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.596207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.596213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.596228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 00:32:18.853 [2024-06-11 15:17:37.606137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.606248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.853 [2024-06-11 15:17:37.606263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.853 [2024-06-11 15:17:37.606270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.853 [2024-06-11 15:17:37.606275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.853 [2024-06-11 15:17:37.606290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.853 qpair failed and we were unable to recover it. 
00:32:18.853 [2024-06-11 15:17:37.616206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.853 [2024-06-11 15:17:37.616295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.854 [2024-06-11 15:17:37.616310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.854 [2024-06-11 15:17:37.616317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.854 [2024-06-11 15:17:37.616323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.854 [2024-06-11 15:17:37.616338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.854 qpair failed and we were unable to recover it. 00:32:18.854 [2024-06-11 15:17:37.626206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.854 [2024-06-11 15:17:37.626341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.854 [2024-06-11 15:17:37.626356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.854 [2024-06-11 15:17:37.626364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.854 [2024-06-11 15:17:37.626369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.854 [2024-06-11 15:17:37.626383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.854 qpair failed and we were unable to recover it. 00:32:18.854 [2024-06-11 15:17:37.636234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.854 [2024-06-11 15:17:37.636327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.854 [2024-06-11 15:17:37.636343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.854 [2024-06-11 15:17:37.636350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.854 [2024-06-11 15:17:37.636355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.854 [2024-06-11 15:17:37.636373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.854 qpair failed and we were unable to recover it. 
00:32:18.854 [2024-06-11 15:17:37.646215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.854 [2024-06-11 15:17:37.646307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.854 [2024-06-11 15:17:37.646323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.854 [2024-06-11 15:17:37.646330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.854 [2024-06-11 15:17:37.646336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.854 [2024-06-11 15:17:37.646350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.854 qpair failed and we were unable to recover it. 00:32:18.854 [2024-06-11 15:17:37.656242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.854 [2024-06-11 15:17:37.656335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.854 [2024-06-11 15:17:37.656350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.854 [2024-06-11 15:17:37.656357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.854 [2024-06-11 15:17:37.656363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.854 [2024-06-11 15:17:37.656377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.854 qpair failed and we were unable to recover it. 00:32:18.854 [2024-06-11 15:17:37.666259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.854 [2024-06-11 15:17:37.666349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.854 [2024-06-11 15:17:37.666365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.854 [2024-06-11 15:17:37.666371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.854 [2024-06-11 15:17:37.666377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.854 [2024-06-11 15:17:37.666392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.854 qpair failed and we were unable to recover it. 
00:32:18.854 [2024-06-11 15:17:37.676381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.854 [2024-06-11 15:17:37.676478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.854 [2024-06-11 15:17:37.676494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.854 [2024-06-11 15:17:37.676501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.854 [2024-06-11 15:17:37.676507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.854 [2024-06-11 15:17:37.676521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.854 qpair failed and we were unable to recover it. 00:32:18.854 [2024-06-11 15:17:37.686367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.854 [2024-06-11 15:17:37.686455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.854 [2024-06-11 15:17:37.686474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.854 [2024-06-11 15:17:37.686481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.854 [2024-06-11 15:17:37.686486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:18.854 [2024-06-11 15:17:37.686501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.854 qpair failed and we were unable to recover it. 00:32:19.115 [2024-06-11 15:17:37.696400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.696487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.115 [2024-06-11 15:17:37.696502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.115 [2024-06-11 15:17:37.696509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.115 [2024-06-11 15:17:37.696515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.115 [2024-06-11 15:17:37.696528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.115 qpair failed and we were unable to recover it. 
00:32:19.115 [2024-06-11 15:17:37.706441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.706533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.115 [2024-06-11 15:17:37.706548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.115 [2024-06-11 15:17:37.706555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.115 [2024-06-11 15:17:37.706560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.115 [2024-06-11 15:17:37.706575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.115 qpair failed and we were unable to recover it. 00:32:19.115 [2024-06-11 15:17:37.716485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.716582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.115 [2024-06-11 15:17:37.716598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.115 [2024-06-11 15:17:37.716605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.115 [2024-06-11 15:17:37.716611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.115 [2024-06-11 15:17:37.716625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.115 qpair failed and we were unable to recover it. 00:32:19.115 [2024-06-11 15:17:37.726505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.726601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.115 [2024-06-11 15:17:37.726617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.115 [2024-06-11 15:17:37.726623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.115 [2024-06-11 15:17:37.726632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.115 [2024-06-11 15:17:37.726646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.115 qpair failed and we were unable to recover it. 
00:32:19.115 [2024-06-11 15:17:37.736533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.736624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.115 [2024-06-11 15:17:37.736639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.115 [2024-06-11 15:17:37.736646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.115 [2024-06-11 15:17:37.736652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.115 [2024-06-11 15:17:37.736666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.115 qpair failed and we were unable to recover it. 00:32:19.115 [2024-06-11 15:17:37.746500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.746591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.115 [2024-06-11 15:17:37.746607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.115 [2024-06-11 15:17:37.746614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.115 [2024-06-11 15:17:37.746619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.115 [2024-06-11 15:17:37.746634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.115 qpair failed and we were unable to recover it. 00:32:19.115 [2024-06-11 15:17:37.756559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.756677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.115 [2024-06-11 15:17:37.756693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.115 [2024-06-11 15:17:37.756699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.115 [2024-06-11 15:17:37.756706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.115 [2024-06-11 15:17:37.756720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.115 qpair failed and we were unable to recover it. 
00:32:19.115 [2024-06-11 15:17:37.766684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.766777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.115 [2024-06-11 15:17:37.766793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.115 [2024-06-11 15:17:37.766800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.115 [2024-06-11 15:17:37.766805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.115 [2024-06-11 15:17:37.766820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.115 qpair failed and we were unable to recover it. 00:32:19.115 [2024-06-11 15:17:37.776589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.776679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.115 [2024-06-11 15:17:37.776695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.115 [2024-06-11 15:17:37.776702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.115 [2024-06-11 15:17:37.776707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.115 [2024-06-11 15:17:37.776722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.115 qpair failed and we were unable to recover it. 00:32:19.115 [2024-06-11 15:17:37.786696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.115 [2024-06-11 15:17:37.786805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.786820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.786827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.786833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.786847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 
00:32:19.116 [2024-06-11 15:17:37.796786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.796872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.796888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.796895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.796901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.796915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 00:32:19.116 [2024-06-11 15:17:37.806668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.806763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.806778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.806785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.806791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.806805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 00:32:19.116 [2024-06-11 15:17:37.816746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.816832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.816848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.816855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.816863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.816878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 
00:32:19.116 [2024-06-11 15:17:37.826733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.826823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.826839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.826845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.826851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.826866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 00:32:19.116 [2024-06-11 15:17:37.836869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.836964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.836979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.836987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.836993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.837007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 00:32:19.116 [2024-06-11 15:17:37.846854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.846948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.846964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.846971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.846976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.846990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 
00:32:19.116 [2024-06-11 15:17:37.856813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.856904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.856919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.856926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.856932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.856946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 00:32:19.116 [2024-06-11 15:17:37.866895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.866990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.867005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.867013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.867018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.867040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 00:32:19.116 [2024-06-11 15:17:37.876909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.877004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.877020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.877051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.877057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.877073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 
00:32:19.116 [2024-06-11 15:17:37.886957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.887166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.887183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.887190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.887195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.887212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 00:32:19.116 [2024-06-11 15:17:37.896961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.116 [2024-06-11 15:17:37.897065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.116 [2024-06-11 15:17:37.897081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.116 [2024-06-11 15:17:37.897087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.116 [2024-06-11 15:17:37.897093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.116 [2024-06-11 15:17:37.897107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.116 qpair failed and we were unable to recover it. 00:32:19.116 [2024-06-11 15:17:37.907017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.117 [2024-06-11 15:17:37.907112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.117 [2024-06-11 15:17:37.907128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.117 [2024-06-11 15:17:37.907138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.117 [2024-06-11 15:17:37.907144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.117 [2024-06-11 15:17:37.907158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.117 qpair failed and we were unable to recover it. 
00:32:19.117 [2024-06-11 15:17:37.917104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.117 [2024-06-11 15:17:37.917236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.117 [2024-06-11 15:17:37.917252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.117 [2024-06-11 15:17:37.917259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.117 [2024-06-11 15:17:37.917265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.117 [2024-06-11 15:17:37.917279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.117 qpair failed and we were unable to recover it. 00:32:19.117 [2024-06-11 15:17:37.927028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.117 [2024-06-11 15:17:37.927116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.117 [2024-06-11 15:17:37.927134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.117 [2024-06-11 15:17:37.927140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.117 [2024-06-11 15:17:37.927147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.117 [2024-06-11 15:17:37.927162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.117 qpair failed and we were unable to recover it. 00:32:19.117 [2024-06-11 15:17:37.937102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.117 [2024-06-11 15:17:37.937204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.117 [2024-06-11 15:17:37.937220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.117 [2024-06-11 15:17:37.937226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.117 [2024-06-11 15:17:37.937232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.117 [2024-06-11 15:17:37.937247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.117 qpair failed and we were unable to recover it. 
00:32:19.117 [2024-06-11 15:17:37.947131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.117 [2024-06-11 15:17:37.947227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.117 [2024-06-11 15:17:37.947242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.117 [2024-06-11 15:17:37.947249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.117 [2024-06-11 15:17:37.947254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.117 [2024-06-11 15:17:37.947269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.117 qpair failed and we were unable to recover it. 00:32:19.377 [2024-06-11 15:17:37.957168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:37.957256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.377 [2024-06-11 15:17:37.957271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.377 [2024-06-11 15:17:37.957279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.377 [2024-06-11 15:17:37.957285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.377 [2024-06-11 15:17:37.957299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-06-11 15:17:37.967202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:37.967295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.377 [2024-06-11 15:17:37.967311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.377 [2024-06-11 15:17:37.967319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.377 [2024-06-11 15:17:37.967325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.377 [2024-06-11 15:17:37.967339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-06-11 15:17:37.977235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:37.977331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.377 [2024-06-11 15:17:37.977346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.377 [2024-06-11 15:17:37.977354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.377 [2024-06-11 15:17:37.977359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.377 [2024-06-11 15:17:37.977374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-06-11 15:17:37.987249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:37.987338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.377 [2024-06-11 15:17:37.987354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.377 [2024-06-11 15:17:37.987361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.377 [2024-06-11 15:17:37.987367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.377 [2024-06-11 15:17:37.987381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-06-11 15:17:37.997280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:37.997465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.377 [2024-06-11 15:17:37.997489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.377 [2024-06-11 15:17:37.997499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.377 [2024-06-11 15:17:37.997505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.377 [2024-06-11 15:17:37.997520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-06-11 15:17:38.007326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:38.007426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.377 [2024-06-11 15:17:38.007441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.377 [2024-06-11 15:17:38.007448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.377 [2024-06-11 15:17:38.007454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.377 [2024-06-11 15:17:38.007468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-06-11 15:17:38.017369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:38.017464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.377 [2024-06-11 15:17:38.017481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.377 [2024-06-11 15:17:38.017489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.377 [2024-06-11 15:17:38.017495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.377 [2024-06-11 15:17:38.017510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-06-11 15:17:38.027385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:38.027474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.377 [2024-06-11 15:17:38.027489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.377 [2024-06-11 15:17:38.027497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.377 [2024-06-11 15:17:38.027502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a8000b90 00:32:19.377 [2024-06-11 15:17:38.027517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-06-11 15:17:38.027808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002650 is same with the state(5) to be set 00:32:19.377 [2024-06-11 15:17:38.037476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:38.037684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.377 [2024-06-11 15:17:38.037738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.377 [2024-06-11 15:17:38.037762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.377 [2024-06-11 15:17:38.037788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a0000b90 00:32:19.377 [2024-06-11 15:17:38.037832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-06-11 15:17:38.047501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.377 [2024-06-11 15:17:38.047645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.378 [2024-06-11 15:17:38.047672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.378 [2024-06-11 15:17:38.047685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.378 [2024-06-11 15:17:38.047697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0a0000b90 00:32:19.378 [2024-06-11 15:17:38.047725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:19.378 qpair failed and we were unable to recover it. 00:32:19.378 [2024-06-11 15:17:38.057495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.378 [2024-06-11 15:17:38.057697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.378 [2024-06-11 15:17:38.057729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.378 [2024-06-11 15:17:38.057741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.378 [2024-06-11 15:17:38.057752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0b0000b90 00:32:19.378 [2024-06-11 15:17:38.057778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:19.378 qpair failed and we were unable to recover it. 
00:32:19.378 [2024-06-11 15:17:38.067548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.378 [2024-06-11 15:17:38.067672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.378 [2024-06-11 15:17:38.067694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.378 [2024-06-11 15:17:38.067705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.378 [2024-06-11 15:17:38.067714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc0b0000b90 00:32:19.378 [2024-06-11 15:17:38.067736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:19.378 qpair failed and we were unable to recover it. 00:32:19.378 [2024-06-11 15:17:38.077624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.378 [2024-06-11 15:17:38.077844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.378 [2024-06-11 15:17:38.077895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.378 [2024-06-11 15:17:38.077918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.378 [2024-06-11 15:17:38.077935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff4b60 00:32:19.378 [2024-06-11 15:17:38.077975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.378 qpair failed and we were unable to recover it. 00:32:19.378 [2024-06-11 15:17:38.087658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.378 [2024-06-11 15:17:38.087837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.378 [2024-06-11 15:17:38.087866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.378 [2024-06-11 15:17:38.087880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.378 [2024-06-11 15:17:38.087891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff4b60 00:32:19.378 [2024-06-11 15:17:38.087917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.378 qpair failed and we were unable to recover it. 
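This is the last of the induced CONNECT failures; the lines that follow show the controllers attaching, the worker threads starting, and nvmf_target_disconnect_tc2 finishing before nvmftestfini unloads nvme_tcp, nvme_fabrics and nvme_keyring and kills the target. For reference, a rough sketch of issuing the same Fabrics CONNECT by hand with the kernel initiator — assuming nvme-cli is installed and the nvme_tcp module is still available on the host, neither of which this log shows directly — would be:

  # load the kernel NVMe/TCP initiator (the test harness unloads it again during cleanup)
  sudo modprobe nvme_tcp
  # attempt the same connect the test was exercising; traddr, trsvcid and subnqn are taken verbatim from the errors above
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # inspect whatever attached, then tear it down again
  sudo nvme list
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Everything other than the address, port and subsystem NQN is illustrative rather than taken from the job itself.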
00:32:19.378 [2024-06-11 15:17:38.088238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002650 (9): Bad file descriptor 00:32:19.378 Initializing NVMe Controllers 00:32:19.378 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:19.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:19.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:19.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:19.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:19.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:19.378 Initialization complete. Launching workers. 00:32:19.378 Starting thread on core 1 00:32:19.378 Starting thread on core 2 00:32:19.378 Starting thread on core 3 00:32:19.378 Starting thread on core 0 00:32:19.378 15:17:38 -- host/target_disconnect.sh@59 -- # sync 00:32:19.378 00:32:19.378 real 0m11.515s 00:32:19.378 user 0m21.082s 00:32:19.378 sys 0m4.465s 00:32:19.378 15:17:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:19.378 15:17:38 -- common/autotest_common.sh@10 -- # set +x 00:32:19.378 ************************************ 00:32:19.378 END TEST nvmf_target_disconnect_tc2 00:32:19.378 ************************************ 00:32:19.378 15:17:38 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:32:19.378 15:17:38 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:19.378 15:17:38 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:32:19.378 15:17:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:19.378 15:17:38 -- nvmf/common.sh@116 -- # sync 00:32:19.378 15:17:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:19.378 15:17:38 -- nvmf/common.sh@119 -- # set +e 00:32:19.378 15:17:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:19.378 15:17:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:19.378 rmmod nvme_tcp 00:32:19.378 rmmod nvme_fabrics 00:32:19.378 rmmod nvme_keyring 00:32:19.378 15:17:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:19.378 15:17:38 -- nvmf/common.sh@123 -- # set -e 00:32:19.378 15:17:38 -- nvmf/common.sh@124 -- # return 0 00:32:19.378 15:17:38 -- nvmf/common.sh@477 -- # '[' -n 3485913 ']' 00:32:19.638 15:17:38 -- nvmf/common.sh@478 -- # killprocess 3485913 00:32:19.638 15:17:38 -- common/autotest_common.sh@926 -- # '[' -z 3485913 ']' 00:32:19.638 15:17:38 -- common/autotest_common.sh@930 -- # kill -0 3485913 00:32:19.638 15:17:38 -- common/autotest_common.sh@931 -- # uname 00:32:19.638 15:17:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:19.638 15:17:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3485913 00:32:19.638 15:17:38 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:32:19.638 15:17:38 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:32:19.638 15:17:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3485913' 00:32:19.638 killing process with pid 3485913 00:32:19.638 15:17:38 -- common/autotest_common.sh@945 -- # kill 3485913 00:32:19.638 15:17:38 -- common/autotest_common.sh@950 -- # wait 3485913 00:32:19.897 15:17:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:19.897 15:17:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:19.897 15:17:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:19.897 15:17:38 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.897 15:17:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:19.897 15:17:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.897 15:17:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:19.897 15:17:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.851 15:17:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:21.851 00:32:21.851 real 0m20.756s 00:32:21.851 user 0m48.832s 00:32:21.851 sys 0m9.779s 00:32:21.851 15:17:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.851 15:17:40 -- common/autotest_common.sh@10 -- # set +x 00:32:21.851 ************************************ 00:32:21.851 END TEST nvmf_target_disconnect 00:32:21.851 ************************************ 00:32:21.851 15:17:40 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:32:21.851 15:17:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:21.851 15:17:40 -- common/autotest_common.sh@10 -- # set +x 00:32:21.851 15:17:40 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:32:21.851 00:32:21.851 real 24m38.352s 00:32:21.851 user 66m17.615s 00:32:21.851 sys 6m38.614s 00:32:21.851 15:17:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.851 15:17:40 -- common/autotest_common.sh@10 -- # set +x 00:32:21.851 ************************************ 00:32:21.851 END TEST nvmf_tcp 00:32:21.851 ************************************ 00:32:21.851 15:17:40 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:32:21.851 15:17:40 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:21.851 15:17:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:21.851 15:17:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:21.851 15:17:40 -- common/autotest_common.sh@10 -- # set +x 00:32:21.851 ************************************ 00:32:21.851 START TEST spdkcli_nvmf_tcp 00:32:21.851 ************************************ 00:32:21.851 15:17:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:22.111 * Looking for test storage... 
00:32:22.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:22.111 15:17:40 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:22.111 15:17:40 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:22.111 15:17:40 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:22.111 15:17:40 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.111 15:17:40 -- nvmf/common.sh@7 -- # uname -s 00:32:22.111 15:17:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.111 15:17:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.111 15:17:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.111 15:17:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.111 15:17:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.111 15:17:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.111 15:17:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.111 15:17:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.111 15:17:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.111 15:17:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.111 15:17:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:22.111 15:17:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:22.111 15:17:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.111 15:17:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.111 15:17:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.111 15:17:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.111 15:17:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.111 15:17:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.111 15:17:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.111 15:17:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.111 15:17:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.111 15:17:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.111 15:17:40 -- paths/export.sh@5 -- # export PATH 00:32:22.111 15:17:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.111 15:17:40 -- nvmf/common.sh@46 -- # : 0 00:32:22.111 15:17:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:22.111 15:17:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:22.111 15:17:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:22.111 15:17:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.111 15:17:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.111 15:17:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:22.111 15:17:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:22.111 15:17:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:22.111 15:17:40 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:22.111 15:17:40 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:22.111 15:17:40 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:22.111 15:17:40 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:22.111 15:17:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:22.111 15:17:40 -- common/autotest_common.sh@10 -- # set +x 00:32:22.111 15:17:40 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:22.111 15:17:40 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3487647 00:32:22.111 15:17:40 -- spdkcli/common.sh@34 -- # waitforlisten 3487647 00:32:22.111 15:17:40 -- common/autotest_common.sh@819 -- # '[' -z 3487647 ']' 00:32:22.111 15:17:40 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:22.111 15:17:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.111 15:17:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:22.111 15:17:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.111 15:17:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:22.111 15:17:40 -- common/autotest_common.sh@10 -- # set +x 00:32:22.112 [2024-06-11 15:17:40.835299] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:32:22.112 [2024-06-11 15:17:40.835347] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487647 ] 00:32:22.112 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.112 [2024-06-11 15:17:40.910036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:22.371 [2024-06-11 15:17:40.996816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:22.371 [2024-06-11 15:17:40.996998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.371 [2024-06-11 15:17:40.997004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.939 15:17:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:22.939 15:17:41 -- common/autotest_common.sh@852 -- # return 0 00:32:22.939 15:17:41 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:22.939 15:17:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:22.939 15:17:41 -- common/autotest_common.sh@10 -- # set +x 00:32:23.198 15:17:41 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:23.198 15:17:41 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:23.198 15:17:41 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:23.198 15:17:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:23.198 15:17:41 -- common/autotest_common.sh@10 -- # set +x 00:32:23.198 15:17:41 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:23.198 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:23.198 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:23.198 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:23.198 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:23.198 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:23.198 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:23.198 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.198 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.198 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:23.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:23.198 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:23.198 ' 00:32:23.455 [2024-06-11 15:17:42.209452] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:25.990 [2024-06-11 15:17:44.231667] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.928 [2024-06-11 15:17:45.407997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:28.898 [2024-06-11 15:17:47.571281] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:30.801 [2024-06-11 15:17:49.429542] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:32.177 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:32.177 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:32.177 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:32.177 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:32.177 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:32.177 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:32.177 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:32.177 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:32.177 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:32.177 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:32.177 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:32.178 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:32.178 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:32.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:32.178 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:32.178 15:17:50 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:32.178 15:17:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:32.178 15:17:50 -- common/autotest_common.sh@10 -- # set +x 00:32:32.178 15:17:51 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:32.178 15:17:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:32.178 15:17:51 -- common/autotest_common.sh@10 -- # set +x 00:32:32.178 15:17:51 -- spdkcli/nvmf.sh@69 -- # check_match 00:32:32.178 15:17:51 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:32.746 15:17:51 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:32.746 15:17:51 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:32.746 15:17:51 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:32.746 15:17:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:32.746 15:17:51 -- common/autotest_common.sh@10 -- # set +x 00:32:32.746 15:17:51 -- spdkcli/nvmf.sh@72 -- # timing_enter 
spdkcli_clear_nvmf_config 00:32:32.746 15:17:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:32.746 15:17:51 -- common/autotest_common.sh@10 -- # set +x 00:32:32.746 15:17:51 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:32.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:32.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:32.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:32.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:32.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:32.746 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:32.746 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:32.746 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:32.746 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:32.746 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:32.746 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:32.746 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:32.746 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:32.746 ' 00:32:38.017 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:38.017 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:38.017 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:38.017 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:38.017 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:38.017 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:38.017 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:38.017 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:38.017 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:38.017 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:38.017 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:38.017 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:38.017 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:38.017 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:38.276 15:17:56 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:38.276 15:17:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:38.276 15:17:56 -- common/autotest_common.sh@10 -- # set +x 00:32:38.276 15:17:56 -- spdkcli/nvmf.sh@90 -- # killprocess 3487647 00:32:38.276 15:17:56 -- common/autotest_common.sh@926 -- # '[' -z 3487647 ']' 00:32:38.276 15:17:56 -- 
common/autotest_common.sh@930 -- # kill -0 3487647 00:32:38.276 15:17:56 -- common/autotest_common.sh@931 -- # uname 00:32:38.276 15:17:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:38.276 15:17:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3487647 00:32:38.276 15:17:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:38.276 15:17:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:38.276 15:17:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3487647' 00:32:38.276 killing process with pid 3487647 00:32:38.276 15:17:56 -- common/autotest_common.sh@945 -- # kill 3487647 00:32:38.276 [2024-06-11 15:17:56.944636] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:38.276 15:17:56 -- common/autotest_common.sh@950 -- # wait 3487647 00:32:38.535 15:17:57 -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:38.535 15:17:57 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:38.535 15:17:57 -- spdkcli/common.sh@13 -- # '[' -n 3487647 ']' 00:32:38.535 15:17:57 -- spdkcli/common.sh@14 -- # killprocess 3487647 00:32:38.535 15:17:57 -- common/autotest_common.sh@926 -- # '[' -z 3487647 ']' 00:32:38.535 15:17:57 -- common/autotest_common.sh@930 -- # kill -0 3487647 00:32:38.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3487647) - No such process 00:32:38.535 15:17:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3487647 is not found' 00:32:38.535 Process with pid 3487647 is not found 00:32:38.535 15:17:57 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:38.535 15:17:57 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:38.535 15:17:57 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:38.535 00:32:38.535 real 0m16.483s 00:32:38.535 user 0m34.642s 00:32:38.535 sys 0m0.806s 00:32:38.535 15:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.535 15:17:57 -- common/autotest_common.sh@10 -- # set +x 00:32:38.535 ************************************ 00:32:38.535 END TEST spdkcli_nvmf_tcp 00:32:38.535 ************************************ 00:32:38.535 15:17:57 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:38.535 15:17:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:38.535 15:17:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:38.535 15:17:57 -- common/autotest_common.sh@10 -- # set +x 00:32:38.535 ************************************ 00:32:38.535 START TEST nvmf_identify_passthru 00:32:38.535 ************************************ 00:32:38.535 15:17:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:38.535 * Looking for test storage... 
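The spdkcli create/check/clear cycle recorded above is driven by the harness through spdkcli_job.py; the same flow can be reproduced by hand with scripts/spdkcli.py against an already-running nvmf_tgt that has the tcp transport created. A minimal sketch, assuming an SPDK checkout at $SPDK_DIR and the default RPC socket; the NQN, serial number and bdev name below are illustrative, not the exact set the test uses:

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # assumption: local SPDK source tree
CLI=$SPDK_DIR/scripts/spdkcli.py
# build a small config: one malloc bdev, one subsystem, one namespace, one listener
$CLI /bdevs/malloc create 32 512 Malloc1
$CLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 SPDK000000000001 max_namespaces=4 allow_any_host=True
$CLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1
$CLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
# inspect the resulting tree the same way check_match does above
$CLI ll /nvmf
# tear everything down again, mirroring spdkcli_clear_nvmf_config
$CLI /nvmf/subsystem delete_all
$CLI /bdevs/malloc delete Malloc1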
00:32:38.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:38.535 15:17:57 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.535 15:17:57 -- nvmf/common.sh@7 -- # uname -s 00:32:38.535 15:17:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.535 15:17:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.535 15:17:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.535 15:17:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.535 15:17:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.535 15:17:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.535 15:17:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.535 15:17:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.535 15:17:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.535 15:17:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.535 15:17:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:38.535 15:17:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:38.535 15:17:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.535 15:17:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.535 15:17:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.535 15:17:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.535 15:17:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.535 15:17:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.535 15:17:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.535 15:17:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.535 15:17:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.535 15:17:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.535 15:17:57 -- paths/export.sh@5 -- # export PATH 00:32:38.535 15:17:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.535 15:17:57 -- nvmf/common.sh@46 -- # : 0 00:32:38.535 15:17:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:38.535 15:17:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:38.535 15:17:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:38.535 15:17:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.535 15:17:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.535 15:17:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:38.536 15:17:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:38.536 15:17:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:38.536 15:17:57 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.536 15:17:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.536 15:17:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.536 15:17:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.536 15:17:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.536 15:17:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.536 15:17:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.536 15:17:57 -- paths/export.sh@5 -- # export PATH 00:32:38.536 15:17:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.536 15:17:57 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:32:38.536 15:17:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:38.536 15:17:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.536 15:17:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:38.536 15:17:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:38.536 15:17:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:38.536 15:17:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.536 15:17:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:38.536 15:17:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.536 15:17:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:38.536 15:17:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:38.536 15:17:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:38.536 15:17:57 -- common/autotest_common.sh@10 -- # set +x 00:32:45.103 15:18:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:45.103 15:18:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:45.103 15:18:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:45.103 15:18:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:45.103 15:18:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:45.103 15:18:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:45.103 15:18:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:45.103 15:18:03 -- nvmf/common.sh@294 -- # net_devs=() 00:32:45.103 15:18:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:45.103 15:18:03 -- nvmf/common.sh@295 -- # e810=() 00:32:45.103 15:18:03 -- nvmf/common.sh@295 -- # local -ga e810 00:32:45.103 15:18:03 -- nvmf/common.sh@296 -- # x722=() 00:32:45.103 15:18:03 -- nvmf/common.sh@296 -- # local -ga x722 00:32:45.103 15:18:03 -- nvmf/common.sh@297 -- # mlx=() 00:32:45.103 15:18:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:45.103 15:18:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.103 15:18:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:45.103 15:18:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:45.103 15:18:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:45.103 15:18:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:45.103 15:18:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:45.103 15:18:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:45.104 15:18:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:45.104 15:18:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:45.104 Found 0000:af:00.0 (0x8086 - 
0x159b) 00:32:45.104 15:18:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:45.104 15:18:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:45.104 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:45.104 15:18:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:45.104 15:18:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:45.104 15:18:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.104 15:18:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:45.104 15:18:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.104 15:18:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:45.104 Found net devices under 0000:af:00.0: cvl_0_0 00:32:45.104 15:18:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.104 15:18:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:45.104 15:18:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.104 15:18:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:45.104 15:18:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.104 15:18:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:45.104 Found net devices under 0000:af:00.1: cvl_0_1 00:32:45.104 15:18:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.104 15:18:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:45.104 15:18:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:45.104 15:18:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:45.104 15:18:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.104 15:18:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.104 15:18:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.104 15:18:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:45.104 15:18:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.104 15:18:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.104 15:18:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:45.104 15:18:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.104 15:18:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.104 15:18:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:45.104 15:18:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:45.104 15:18:03 -- nvmf/common.sh@247 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:45.104 15:18:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.104 15:18:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.104 15:18:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.104 15:18:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:45.104 15:18:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.104 15:18:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.104 15:18:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.104 15:18:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:45.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:32:45.104 00:32:45.104 --- 10.0.0.2 ping statistics --- 00:32:45.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.104 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:32:45.104 15:18:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:45.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:32:45.104 00:32:45.104 --- 10.0.0.1 ping statistics --- 00:32:45.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.104 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:32:45.104 15:18:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.104 15:18:03 -- nvmf/common.sh@410 -- # return 0 00:32:45.104 15:18:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:45.104 15:18:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.104 15:18:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:45.104 15:18:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.104 15:18:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:45.104 15:18:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:45.104 15:18:03 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:45.104 15:18:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:45.104 15:18:03 -- common/autotest_common.sh@10 -- # set +x 00:32:45.104 15:18:03 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:45.104 15:18:03 -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:45.104 15:18:03 -- common/autotest_common.sh@1509 -- # local bdfs 00:32:45.104 15:18:03 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:45.104 15:18:03 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:45.104 15:18:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:45.104 15:18:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:45.104 15:18:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:45.104 15:18:03 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:45.104 15:18:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:45.104 15:18:03 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:45.104 15:18:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:32:45.104 15:18:03 -- common/autotest_common.sh@1512 -- # echo 0000:86:00.0 00:32:45.104 15:18:03 -- 
target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:32:45.104 15:18:03 -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:32:45.104 15:18:03 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:32:45.104 15:18:03 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:45.104 15:18:03 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:45.364 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.562 15:18:08 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:32:49.562 15:18:08 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:49.563 15:18:08 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:32:49.563 15:18:08 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:49.563 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.758 15:18:12 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:53.758 15:18:12 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:53.758 15:18:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:53.758 15:18:12 -- common/autotest_common.sh@10 -- # set +x 00:32:53.758 15:18:12 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:53.758 15:18:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:53.758 15:18:12 -- common/autotest_common.sh@10 -- # set +x 00:32:53.758 15:18:12 -- target/identify_passthru.sh@31 -- # nvmfpid=3496124 00:32:53.758 15:18:12 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:53.758 15:18:12 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:53.758 15:18:12 -- target/identify_passthru.sh@35 -- # waitforlisten 3496124 00:32:53.758 15:18:12 -- common/autotest_common.sh@819 -- # '[' -z 3496124 ']' 00:32:53.758 15:18:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.758 15:18:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:53.758 15:18:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.758 15:18:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:53.758 15:18:12 -- common/autotest_common.sh@10 -- # set +x 00:32:53.758 [2024-06-11 15:18:12.514887] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:53.758 [2024-06-11 15:18:12.514944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.758 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.018 [2024-06-11 15:18:12.608698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:54.018 [2024-06-11 15:18:12.698112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:54.018 [2024-06-11 15:18:12.698254] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
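At this point the passthru test has recorded the local controller's serial and model number over PCIe and has just launched nvmf_tgt inside the target namespace with --wait-for-rpc. The rpc_cmd sequence logged below (enable passthru identify, initialize the framework, create the transport, attach the controller and export it) condenses to the rpc.py calls in this sketch, assuming the same $SPDK_DIR as above and that the target's default UNIX-domain RPC socket is reachable from the default namespace:

RPC=$SPDK_DIR/scripts/rpc.py          # assumption: target listens on /var/tmp/spdk.sock
bdf=0000:86:00.0                      # the bdf resolved above
# --wait-for-rpc means passthru identify has to be enabled before framework init
$RPC nvmf_set_config --passthru-identify-ctrlr
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
# expose the local controller through an NVMe-oF subsystem
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a $bdf
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# identify through the fabric and compare against the serial read over PCIe
$SPDK_DIR/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep 'Serial Number:'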
00:32:54.018 [2024-06-11 15:18:12.698266] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:54.018 [2024-06-11 15:18:12.698275] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:54.018 [2024-06-11 15:18:12.698331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.018 [2024-06-11 15:18:12.698432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:54.018 [2024-06-11 15:18:12.698542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:54.018 [2024-06-11 15:18:12.698543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.956 15:18:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:54.956 15:18:13 -- common/autotest_common.sh@852 -- # return 0 00:32:54.956 15:18:13 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:54.956 15:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:54.956 15:18:13 -- common/autotest_common.sh@10 -- # set +x 00:32:54.956 INFO: Log level set to 20 00:32:54.956 INFO: Requests: 00:32:54.956 { 00:32:54.956 "jsonrpc": "2.0", 00:32:54.956 "method": "nvmf_set_config", 00:32:54.956 "id": 1, 00:32:54.956 "params": { 00:32:54.956 "admin_cmd_passthru": { 00:32:54.956 "identify_ctrlr": true 00:32:54.956 } 00:32:54.956 } 00:32:54.956 } 00:32:54.956 00:32:54.956 INFO: response: 00:32:54.956 { 00:32:54.956 "jsonrpc": "2.0", 00:32:54.956 "id": 1, 00:32:54.956 "result": true 00:32:54.956 } 00:32:54.956 00:32:54.956 15:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:54.956 15:18:13 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:54.956 15:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:54.956 15:18:13 -- common/autotest_common.sh@10 -- # set +x 00:32:54.956 INFO: Setting log level to 20 00:32:54.956 INFO: Setting log level to 20 00:32:54.956 INFO: Log level set to 20 00:32:54.956 INFO: Log level set to 20 00:32:54.956 INFO: Requests: 00:32:54.956 { 00:32:54.956 "jsonrpc": "2.0", 00:32:54.956 "method": "framework_start_init", 00:32:54.956 "id": 1 00:32:54.956 } 00:32:54.956 00:32:54.956 INFO: Requests: 00:32:54.956 { 00:32:54.956 "jsonrpc": "2.0", 00:32:54.956 "method": "framework_start_init", 00:32:54.956 "id": 1 00:32:54.956 } 00:32:54.956 00:32:54.956 [2024-06-11 15:18:13.553526] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:54.956 INFO: response: 00:32:54.956 { 00:32:54.956 "jsonrpc": "2.0", 00:32:54.956 "id": 1, 00:32:54.956 "result": true 00:32:54.956 } 00:32:54.956 00:32:54.956 INFO: response: 00:32:54.956 { 00:32:54.956 "jsonrpc": "2.0", 00:32:54.956 "id": 1, 00:32:54.956 "result": true 00:32:54.956 } 00:32:54.956 00:32:54.956 15:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:54.956 15:18:13 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:54.956 15:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:54.956 15:18:13 -- common/autotest_common.sh@10 -- # set +x 00:32:54.956 INFO: Setting log level to 40 00:32:54.956 INFO: Setting log level to 40 00:32:54.956 INFO: Setting log level to 40 00:32:54.956 [2024-06-11 15:18:13.567085] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.956 15:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:54.956 15:18:13 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:32:54.956 15:18:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:54.956 15:18:13 -- common/autotest_common.sh@10 -- # set +x 00:32:54.956 15:18:13 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:32:54.956 15:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:54.956 15:18:13 -- common/autotest_common.sh@10 -- # set +x 00:32:58.246 Nvme0n1 00:32:58.246 15:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.246 15:18:16 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:58.246 15:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.246 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:32:58.246 15:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.246 15:18:16 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:58.246 15:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.246 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:32:58.246 15:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.246 15:18:16 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.246 15:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.246 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:32:58.246 [2024-06-11 15:18:16.500544] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.246 15:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.246 15:18:16 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:58.246 15:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.246 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:32:58.246 [2024-06-11 15:18:16.508290] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:58.246 [ 00:32:58.246 { 00:32:58.246 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:58.246 "subtype": "Discovery", 00:32:58.246 "listen_addresses": [], 00:32:58.246 "allow_any_host": true, 00:32:58.246 "hosts": [] 00:32:58.246 }, 00:32:58.246 { 00:32:58.246 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.246 "subtype": "NVMe", 00:32:58.246 "listen_addresses": [ 00:32:58.246 { 00:32:58.246 "transport": "TCP", 00:32:58.247 "trtype": "TCP", 00:32:58.247 "adrfam": "IPv4", 00:32:58.247 "traddr": "10.0.0.2", 00:32:58.247 "trsvcid": "4420" 00:32:58.247 } 00:32:58.247 ], 00:32:58.247 "allow_any_host": true, 00:32:58.247 "hosts": [], 00:32:58.247 "serial_number": "SPDK00000000000001", 00:32:58.247 "model_number": "SPDK bdev Controller", 00:32:58.247 "max_namespaces": 1, 00:32:58.247 "min_cntlid": 1, 00:32:58.247 "max_cntlid": 65519, 00:32:58.247 "namespaces": [ 00:32:58.247 { 00:32:58.247 "nsid": 1, 00:32:58.247 "bdev_name": "Nvme0n1", 00:32:58.247 "name": "Nvme0n1", 00:32:58.247 "nguid": "DA462282CB93488FAE7025069FA9DB38", 00:32:58.247 "uuid": "da462282-cb93-488f-ae70-25069fa9db38" 00:32:58.247 } 00:32:58.247 ] 00:32:58.247 } 00:32:58.247 ] 00:32:58.247 15:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.247 15:18:16 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:58.247 15:18:16 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:58.247 15:18:16 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:58.247 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.247 15:18:16 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:32:58.247 15:18:16 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:58.247 15:18:16 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:58.247 15:18:16 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:58.247 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.247 15:18:16 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:58.247 15:18:16 -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:32:58.247 15:18:16 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:58.247 15:18:16 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.247 15:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.247 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:32:58.247 15:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.247 15:18:16 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:58.247 15:18:16 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:58.247 15:18:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:58.247 15:18:16 -- nvmf/common.sh@116 -- # sync 00:32:58.247 15:18:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:58.247 15:18:16 -- nvmf/common.sh@119 -- # set +e 00:32:58.247 15:18:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:58.247 15:18:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:58.247 rmmod nvme_tcp 00:32:58.247 rmmod nvme_fabrics 00:32:58.247 rmmod nvme_keyring 00:32:58.247 15:18:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:58.247 15:18:16 -- nvmf/common.sh@123 -- # set -e 00:32:58.247 15:18:16 -- nvmf/common.sh@124 -- # return 0 00:32:58.247 15:18:16 -- nvmf/common.sh@477 -- # '[' -n 3496124 ']' 00:32:58.247 15:18:16 -- nvmf/common.sh@478 -- # killprocess 3496124 00:32:58.247 15:18:16 -- common/autotest_common.sh@926 -- # '[' -z 3496124 ']' 00:32:58.247 15:18:16 -- common/autotest_common.sh@930 -- # kill -0 3496124 00:32:58.247 15:18:16 -- common/autotest_common.sh@931 -- # uname 00:32:58.247 15:18:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:58.247 15:18:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3496124 00:32:58.247 15:18:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:58.247 15:18:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:58.247 15:18:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3496124' 00:32:58.247 killing process with pid 3496124 00:32:58.247 15:18:16 -- common/autotest_common.sh@945 -- # kill 3496124 00:32:58.247 [2024-06-11 15:18:16.936385] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:58.247 15:18:16 -- common/autotest_common.sh@950 -- # wait 3496124 00:33:00.153 15:18:18 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:00.153 15:18:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:00.153 15:18:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:00.153 15:18:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:00.153 15:18:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:00.153 15:18:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.153 15:18:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:00.153 15:18:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.060 15:18:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:02.060 00:33:02.060 real 0m23.354s 00:33:02.060 user 0m31.038s 00:33:02.060 sys 0m5.793s 00:33:02.060 15:18:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:02.060 15:18:20 -- common/autotest_common.sh@10 -- # set +x 00:33:02.060 ************************************ 00:33:02.060 END TEST nvmf_identify_passthru 00:33:02.060 ************************************ 00:33:02.060 15:18:20 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:02.060 15:18:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:02.060 15:18:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:02.060 15:18:20 -- common/autotest_common.sh@10 -- # set +x 00:33:02.060 ************************************ 00:33:02.060 START TEST nvmf_dif 00:33:02.060 ************************************ 00:33:02.060 15:18:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:02.060 * Looking for test storage... 00:33:02.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.061 15:18:20 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.061 15:18:20 -- nvmf/common.sh@7 -- # uname -s 00:33:02.061 15:18:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.061 15:18:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.061 15:18:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.061 15:18:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.061 15:18:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.061 15:18:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.061 15:18:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.061 15:18:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.061 15:18:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.061 15:18:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.061 15:18:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:02.061 15:18:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:02.061 15:18:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.061 15:18:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.061 15:18:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.061 15:18:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.061 15:18:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.061 15:18:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.061 15:18:20 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.061 15:18:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.061 15:18:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.061 15:18:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.061 15:18:20 -- paths/export.sh@5 -- # export PATH 00:33:02.061 15:18:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.061 15:18:20 -- nvmf/common.sh@46 -- # : 0 00:33:02.061 15:18:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:02.061 15:18:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:02.061 15:18:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:02.061 15:18:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.061 15:18:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.061 15:18:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:02.061 15:18:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:02.061 15:18:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:02.061 15:18:20 -- target/dif.sh@15 -- # NULL_META=16 00:33:02.061 15:18:20 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:02.061 15:18:20 -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:02.061 15:18:20 -- target/dif.sh@15 -- # NULL_DIF=1 00:33:02.061 15:18:20 -- target/dif.sh@135 -- # nvmftestinit 00:33:02.061 15:18:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:02.061 15:18:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.061 15:18:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:02.061 15:18:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:02.061 15:18:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:02.061 15:18:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.061 15:18:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:02.061 15:18:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.061 15:18:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:02.061 15:18:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
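The nvmftestinit call running here repeats the NIC wiring the passthru test used above and that appears again in the log below: one E810 port (cvl_0_0) is moved into a dedicated network namespace as the target side, the other (cvl_0_1) stays in the default namespace as the initiator, and a ping in each direction verifies the 10.0.0.0/24 link before TCP port 4420 is opened. Condensed into plain commands (interface names are the ones detected on this rig):

TGT_IF=cvl_0_0  INIT_IF=cvl_0_1  NS=cvl_0_0_ns_spdk
ip -4 addr flush $TGT_IF
ip -4 addr flush $INIT_IF
ip netns add $NS
ip link set $TGT_IF netns $NS                          # target port gets its own namespace
ip addr add 10.0.0.1/24 dev $INIT_IF                   # initiator side
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF  # target side
ip link set $INIT_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INIT_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec $NS ping -c 1 10.0.0.1                   # target -> initiator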
00:33:02.061 15:18:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:02.061 15:18:20 -- common/autotest_common.sh@10 -- # set +x 00:33:08.626 15:18:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:08.626 15:18:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:08.626 15:18:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:08.626 15:18:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:08.626 15:18:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:08.626 15:18:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:08.626 15:18:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:08.626 15:18:26 -- nvmf/common.sh@294 -- # net_devs=() 00:33:08.626 15:18:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:08.626 15:18:26 -- nvmf/common.sh@295 -- # e810=() 00:33:08.626 15:18:26 -- nvmf/common.sh@295 -- # local -ga e810 00:33:08.626 15:18:26 -- nvmf/common.sh@296 -- # x722=() 00:33:08.626 15:18:26 -- nvmf/common.sh@296 -- # local -ga x722 00:33:08.626 15:18:26 -- nvmf/common.sh@297 -- # mlx=() 00:33:08.626 15:18:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:08.626 15:18:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.626 15:18:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:08.626 15:18:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:08.626 15:18:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:08.626 15:18:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:08.626 15:18:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:08.626 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:08.626 15:18:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:08.626 15:18:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:08.626 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:08.626 15:18:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:33:08.626 15:18:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:08.626 15:18:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:08.626 15:18:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.626 15:18:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:08.626 15:18:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.626 15:18:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:08.626 Found net devices under 0000:af:00.0: cvl_0_0 00:33:08.626 15:18:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.626 15:18:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:08.626 15:18:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.626 15:18:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:08.626 15:18:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.626 15:18:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:08.626 Found net devices under 0000:af:00.1: cvl_0_1 00:33:08.626 15:18:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.626 15:18:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:08.626 15:18:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:08.626 15:18:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:08.626 15:18:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:08.626 15:18:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.626 15:18:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.626 15:18:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.626 15:18:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:08.626 15:18:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.626 15:18:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.626 15:18:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:08.626 15:18:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.626 15:18:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.626 15:18:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:08.626 15:18:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:08.626 15:18:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.626 15:18:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.626 15:18:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.626 15:18:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.626 15:18:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:08.627 15:18:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.627 15:18:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.627 15:18:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.627 15:18:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:08.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:33:08.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:33:08.627 00:33:08.627 --- 10.0.0.2 ping statistics --- 00:33:08.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.627 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:33:08.627 15:18:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:08.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:33:08.627 00:33:08.627 --- 10.0.0.1 ping statistics --- 00:33:08.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.627 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:33:08.627 15:18:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.627 15:18:26 -- nvmf/common.sh@410 -- # return 0 00:33:08.627 15:18:26 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:08.627 15:18:26 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:11.161 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:11.161 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:11.161 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:11.161 15:18:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.161 15:18:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:11.161 15:18:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:11.161 15:18:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.161 15:18:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:11.161 15:18:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:11.161 15:18:29 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:11.161 15:18:29 -- target/dif.sh@137 -- # nvmfappstart 00:33:11.161 15:18:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:11.161 15:18:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:11.161 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:33:11.161 15:18:29 -- nvmf/common.sh@469 -- # nvmfpid=3502624 00:33:11.161 15:18:29 -- nvmf/common.sh@470 -- # waitforlisten 3502624 00:33:11.161 15:18:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:11.161 15:18:29 -- common/autotest_common.sh@819 -- # '[' -z 3502624 ']' 00:33:11.161 15:18:29 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:11.161 15:18:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:11.161 15:18:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.161 15:18:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:11.161 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:33:11.161 [2024-06-11 15:18:29.868654] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:33:11.161 [2024-06-11 15:18:29.868707] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.161 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.161 [2024-06-11 15:18:29.961588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.420 [2024-06-11 15:18:30.057137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:11.420 [2024-06-11 15:18:30.057276] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.420 [2024-06-11 15:18:30.057287] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.420 [2024-06-11 15:18:30.057301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:11.420 [2024-06-11 15:18:30.057329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.988 15:18:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:11.988 15:18:30 -- common/autotest_common.sh@852 -- # return 0 00:33:11.988 15:18:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:11.988 15:18:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:11.988 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:33:11.988 15:18:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.249 15:18:30 -- target/dif.sh@139 -- # create_transport 00:33:12.249 15:18:30 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:12.249 15:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:12.249 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:33:12.249 [2024-06-11 15:18:30.837123] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.249 15:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:12.249 15:18:30 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:12.249 15:18:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:12.249 15:18:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:12.249 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:33:12.249 ************************************ 00:33:12.249 START TEST fio_dif_1_default 00:33:12.249 ************************************ 00:33:12.249 15:18:30 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:33:12.249 15:18:30 -- target/dif.sh@86 -- # create_subsystems 0 00:33:12.249 15:18:30 -- target/dif.sh@28 -- # local sub 00:33:12.249 15:18:30 -- target/dif.sh@30 -- # for sub in "$@" 00:33:12.249 15:18:30 -- target/dif.sh@31 -- # create_subsystem 0 00:33:12.249 15:18:30 -- target/dif.sh@18 -- # local sub_id=0 00:33:12.249 15:18:30 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:12.249 15:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:12.249 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:33:12.249 bdev_null0 00:33:12.249 15:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:12.249 15:18:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:12.249 15:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:12.249 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:33:12.249 15:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:12.249 15:18:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:12.249 15:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:12.249 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:33:12.249 15:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:12.249 15:18:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:12.249 15:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:12.249 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:33:12.249 [2024-06-11 15:18:30.881361] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.249 15:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:12.249 15:18:30 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:12.249 15:18:30 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:12.249 15:18:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:12.249 15:18:30 -- nvmf/common.sh@520 -- # config=() 00:33:12.249 15:18:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:12.249 15:18:30 -- nvmf/common.sh@520 -- # local subsystem config 00:33:12.249 15:18:30 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:12.249 15:18:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:12.249 15:18:30 -- target/dif.sh@82 -- # gen_fio_conf 00:33:12.249 15:18:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:12.249 15:18:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:12.249 { 00:33:12.249 "params": { 00:33:12.249 "name": "Nvme$subsystem", 00:33:12.249 "trtype": "$TEST_TRANSPORT", 00:33:12.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:12.249 "adrfam": "ipv4", 00:33:12.249 "trsvcid": "$NVMF_PORT", 00:33:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:12.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:12.249 "hdgst": ${hdgst:-false}, 00:33:12.249 "ddgst": ${ddgst:-false} 00:33:12.249 }, 00:33:12.249 "method": "bdev_nvme_attach_controller" 00:33:12.249 } 00:33:12.249 EOF 00:33:12.249 )") 00:33:12.249 15:18:30 -- target/dif.sh@54 -- # local file 00:33:12.249 15:18:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:12.249 15:18:30 -- target/dif.sh@56 -- # cat 00:33:12.249 15:18:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:12.249 15:18:30 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.249 15:18:30 -- common/autotest_common.sh@1320 -- # shift 00:33:12.249 15:18:30 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:33:12.249 15:18:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.249 15:18:30 -- nvmf/common.sh@542 -- # cat 00:33:12.249 15:18:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:12.249 15:18:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.249 15:18:30 -- target/dif.sh@72 -- # (( file <= files )) 00:33:12.249 15:18:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:12.249 15:18:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:12.249 15:18:30 -- nvmf/common.sh@544 -- # jq . 00:33:12.249 15:18:30 -- nvmf/common.sh@545 -- # IFS=, 00:33:12.249 15:18:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:12.249 "params": { 00:33:12.249 "name": "Nvme0", 00:33:12.249 "trtype": "tcp", 00:33:12.249 "traddr": "10.0.0.2", 00:33:12.249 "adrfam": "ipv4", 00:33:12.249 "trsvcid": "4420", 00:33:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:12.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:12.249 "hdgst": false, 00:33:12.249 "ddgst": false 00:33:12.249 }, 00:33:12.249 "method": "bdev_nvme_attach_controller" 00:33:12.249 }' 00:33:12.249 15:18:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:12.249 15:18:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:12.249 15:18:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.249 15:18:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.249 15:18:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:12.249 15:18:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:12.249 15:18:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:12.249 15:18:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:12.249 15:18:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:12.250 15:18:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:12.509 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:12.509 fio-3.35 00:33:12.509 Starting 1 thread 00:33:12.509 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.077 [2024-06-11 15:18:31.758546] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
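Note: the JSON blob printed just above is what gen_nvmf_target_json feeds to fio's spdk_bdev plugin via --spdk_json_conf, so the Nvme0 controller is attached inside the fio process rather than through the target's RPC socket. Against an already running SPDK app the same attach could be issued with scripts/rpc.py; a minimal sketch, assuming the stock rpc.py option names (the command below is not taken from this log):

scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
# the attached namespace is then exposed as bdev Nvme0n1 (default SPDK naming, assumed here)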
00:33:13.077 [2024-06-11 15:18:31.758589] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:23.110 00:33:23.110 filename0: (groupid=0, jobs=1): err= 0: pid=3503056: Tue Jun 11 15:18:41 2024 00:33:23.110 read: IOPS=184, BW=740KiB/s (757kB/s)(7424KiB/10038msec) 00:33:23.110 slat (nsec): min=9012, max=24972, avg=9327.85, stdev=725.81 00:33:23.110 clat (usec): min=1041, max=47341, avg=21607.03, stdev=20384.43 00:33:23.110 lat (usec): min=1051, max=47365, avg=21616.36, stdev=20384.40 00:33:23.110 clat percentiles (usec): 00:33:23.110 | 1.00th=[ 1057], 5.00th=[ 1074], 10.00th=[ 1090], 20.00th=[ 1139], 00:33:23.110 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[41681], 60.00th=[41681], 00:33:23.110 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:33:23.110 | 99.00th=[42730], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:33:23.110 | 99.99th=[47449] 00:33:23.110 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=740.80, stdev=33.28, samples=20 00:33:23.110 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:33:23.110 lat (msec) : 2=49.78%, 50=50.22% 00:33:23.110 cpu : usr=94.58%, sys=5.11%, ctx=12, majf=0, minf=223 00:33:23.110 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.110 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.110 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:23.110 00:33:23.110 Run status group 0 (all jobs): 00:33:23.110 READ: bw=740KiB/s (757kB/s), 740KiB/s-740KiB/s (757kB/s-757kB/s), io=7424KiB (7602kB), run=10038-10038msec 00:33:23.369 15:18:42 -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:23.370 15:18:42 -- target/dif.sh@43 -- # local sub 00:33:23.370 15:18:42 -- target/dif.sh@45 -- # for sub in "$@" 00:33:23.370 15:18:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:23.370 15:18:42 -- target/dif.sh@36 -- # local sub_id=0 00:33:23.370 15:18:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.370 15:18:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.370 00:33:23.370 real 0m11.249s 00:33:23.370 user 0m20.353s 00:33:23.370 sys 0m0.831s 00:33:23.370 15:18:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 ************************************ 00:33:23.370 END TEST fio_dif_1_default 00:33:23.370 ************************************ 00:33:23.370 15:18:42 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:23.370 15:18:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:23.370 15:18:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 ************************************ 00:33:23.370 START TEST fio_dif_1_multi_subsystems 00:33:23.370 
************************************ 00:33:23.370 15:18:42 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:33:23.370 15:18:42 -- target/dif.sh@92 -- # local files=1 00:33:23.370 15:18:42 -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:23.370 15:18:42 -- target/dif.sh@28 -- # local sub 00:33:23.370 15:18:42 -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.370 15:18:42 -- target/dif.sh@31 -- # create_subsystem 0 00:33:23.370 15:18:42 -- target/dif.sh@18 -- # local sub_id=0 00:33:23.370 15:18:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 bdev_null0 00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.370 15:18:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.370 15:18:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.370 15:18:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 [2024-06-11 15:18:42.173680] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.370 15:18:42 -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.370 15:18:42 -- target/dif.sh@31 -- # create_subsystem 1 00:33:23.370 15:18:42 -- target/dif.sh@18 -- # local sub_id=1 00:33:23.370 15:18:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 bdev_null1 00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.370 15:18:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.370 15:18:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.370 15:18:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.370 15:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.370 15:18:42 -- common/autotest_common.sh@10 -- # set +x 
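Note: rpc_cmd in the trace above is the autotest wrapper that forwards to scripts/rpc.py on /var/tmp/spdk.sock. Replayed by hand against a standalone target, the two DIF-enabled subsystems built for this test would be created with the commands below; the arguments are copied verbatim from the trace, and only the direct rpc.py invocation itself is an assumption:

# subsystem 0: 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# subsystem 1: identical layout on bdev_null1 behind cnode1
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420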
00:33:23.370 15:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.629 15:18:42 -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:23.629 15:18:42 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:23.629 15:18:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:23.629 15:18:42 -- nvmf/common.sh@520 -- # config=() 00:33:23.629 15:18:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.629 15:18:42 -- nvmf/common.sh@520 -- # local subsystem config 00:33:23.629 15:18:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:23.629 15:18:42 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.629 15:18:42 -- target/dif.sh@82 -- # gen_fio_conf 00:33:23.629 15:18:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:23.629 { 00:33:23.629 "params": { 00:33:23.629 "name": "Nvme$subsystem", 00:33:23.629 "trtype": "$TEST_TRANSPORT", 00:33:23.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.629 "adrfam": "ipv4", 00:33:23.629 "trsvcid": "$NVMF_PORT", 00:33:23.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.629 "hdgst": ${hdgst:-false}, 00:33:23.629 "ddgst": ${ddgst:-false} 00:33:23.629 }, 00:33:23.629 "method": "bdev_nvme_attach_controller" 00:33:23.629 } 00:33:23.629 EOF 00:33:23.629 )") 00:33:23.629 15:18:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:23.629 15:18:42 -- target/dif.sh@54 -- # local file 00:33:23.629 15:18:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.629 15:18:42 -- target/dif.sh@56 -- # cat 00:33:23.629 15:18:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:23.629 15:18:42 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.629 15:18:42 -- common/autotest_common.sh@1320 -- # shift 00:33:23.629 15:18:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:23.629 15:18:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.629 15:18:42 -- nvmf/common.sh@542 -- # cat 00:33:23.629 15:18:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:23.629 15:18:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.629 15:18:42 -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.629 15:18:42 -- target/dif.sh@73 -- # cat 00:33:23.629 15:18:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:23.629 15:18:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:23.629 15:18:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:23.629 15:18:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:23.629 { 00:33:23.629 "params": { 00:33:23.629 "name": "Nvme$subsystem", 00:33:23.629 "trtype": "$TEST_TRANSPORT", 00:33:23.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.629 "adrfam": "ipv4", 00:33:23.629 "trsvcid": "$NVMF_PORT", 00:33:23.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.630 "hdgst": ${hdgst:-false}, 00:33:23.630 "ddgst": ${ddgst:-false} 00:33:23.630 }, 00:33:23.630 "method": "bdev_nvme_attach_controller" 00:33:23.630 } 00:33:23.630 EOF 00:33:23.630 )") 00:33:23.630 15:18:42 -- target/dif.sh@72 -- # (( file++ )) 00:33:23.630 
15:18:42 -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.630 15:18:42 -- nvmf/common.sh@542 -- # cat 00:33:23.630 15:18:42 -- nvmf/common.sh@544 -- # jq . 00:33:23.630 15:18:42 -- nvmf/common.sh@545 -- # IFS=, 00:33:23.630 15:18:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:23.630 "params": { 00:33:23.630 "name": "Nvme0", 00:33:23.630 "trtype": "tcp", 00:33:23.630 "traddr": "10.0.0.2", 00:33:23.630 "adrfam": "ipv4", 00:33:23.630 "trsvcid": "4420", 00:33:23.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:23.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:23.630 "hdgst": false, 00:33:23.630 "ddgst": false 00:33:23.630 }, 00:33:23.630 "method": "bdev_nvme_attach_controller" 00:33:23.630 },{ 00:33:23.630 "params": { 00:33:23.630 "name": "Nvme1", 00:33:23.630 "trtype": "tcp", 00:33:23.630 "traddr": "10.0.0.2", 00:33:23.630 "adrfam": "ipv4", 00:33:23.630 "trsvcid": "4420", 00:33:23.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.630 "hdgst": false, 00:33:23.630 "ddgst": false 00:33:23.630 }, 00:33:23.630 "method": "bdev_nvme_attach_controller" 00:33:23.630 }' 00:33:23.630 15:18:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:23.630 15:18:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:23.630 15:18:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.630 15:18:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.630 15:18:42 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:23.630 15:18:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:23.630 15:18:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:23.630 15:18:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:23.630 15:18:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:23.630 15:18:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.905 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.905 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.905 fio-3.35 00:33:23.905 Starting 2 threads 00:33:23.905 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.473 [2024-06-11 15:18:43.225432] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:24.473 [2024-06-11 15:18:43.225490] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:36.682 00:33:36.682 filename0: (groupid=0, jobs=1): err= 0: pid=3505210: Tue Jun 11 15:18:53 2024 00:33:36.682 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10027msec) 00:33:36.682 slat (nsec): min=9156, max=39689, avg=11210.85, stdev=3150.60 00:33:36.682 clat (usec): min=41803, max=43107, avg=42094.26, stdev=335.69 00:33:36.682 lat (usec): min=41812, max=43122, avg=42105.47, stdev=335.96 00:33:36.682 clat percentiles (usec): 00:33:36.682 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:33:36.682 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:36.682 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:33:36.682 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:33:36.682 | 99.99th=[43254] 00:33:36.682 bw ( KiB/s): min= 352, max= 384, per=33.88%, avg=379.20, stdev=11.72, samples=20 00:33:36.682 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:33:36.682 lat (msec) : 50=100.00% 00:33:36.682 cpu : usr=97.44%, sys=2.24%, ctx=10, majf=0, minf=176 00:33:36.682 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:36.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.682 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.682 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:36.682 filename1: (groupid=0, jobs=1): err= 0: pid=3505211: Tue Jun 11 15:18:53 2024 00:33:36.682 read: IOPS=184, BW=739KiB/s (757kB/s)(7424KiB/10042msec) 00:33:36.682 slat (nsec): min=9119, max=26597, avg=10393.63, stdev=2325.98 00:33:36.682 clat (usec): min=742, max=42948, avg=21611.27, stdev=20442.91 00:33:36.682 lat (usec): min=752, max=42958, avg=21621.66, stdev=20442.16 00:33:36.682 clat percentiles (usec): 00:33:36.682 | 1.00th=[ 1045], 5.00th=[ 1057], 10.00th=[ 1057], 20.00th=[ 1074], 00:33:36.682 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[41157], 60.00th=[41681], 00:33:36.682 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:33:36.682 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:36.682 | 99.99th=[42730] 00:33:36.682 bw ( KiB/s): min= 672, max= 768, per=66.16%, avg=740.80, stdev=34.86, samples=20 00:33:36.682 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:33:36.682 lat (usec) : 750=0.05%, 1000=0.16% 00:33:36.682 lat (msec) : 2=49.57%, 50=50.22% 00:33:36.682 cpu : usr=97.59%, sys=2.09%, ctx=13, majf=0, minf=130 00:33:36.682 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:36.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.682 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.682 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:36.682 00:33:36.682 Run status group 0 (all jobs): 00:33:36.682 READ: bw=1119KiB/s (1145kB/s), 380KiB/s-739KiB/s (389kB/s-757kB/s), io=11.0MiB (11.5MB), run=10027-10042msec 00:33:36.682 15:18:53 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:36.682 15:18:53 -- target/dif.sh@43 -- # local sub 00:33:36.682 15:18:53 -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.682 15:18:53 -- target/dif.sh@46 -- # destroy_subsystem 
0 00:33:36.682 15:18:53 -- target/dif.sh@36 -- # local sub_id=0 00:33:36.682 15:18:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:36.682 15:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.682 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.682 15:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.682 15:18:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:36.682 15:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.682 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.682 15:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.682 15:18:53 -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.682 15:18:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:36.682 15:18:53 -- target/dif.sh@36 -- # local sub_id=1 00:33:36.682 15:18:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:36.682 15:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.682 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.682 15:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.682 15:18:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:36.682 15:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.682 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.682 15:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.682 00:33:36.682 real 0m11.545s 00:33:36.682 user 0m32.404s 00:33:36.682 sys 0m0.774s 00:33:36.682 15:18:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:36.682 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.682 ************************************ 00:33:36.682 END TEST fio_dif_1_multi_subsystems 00:33:36.682 ************************************ 00:33:36.682 15:18:53 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:36.682 15:18:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:36.682 15:18:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:36.682 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.682 ************************************ 00:33:36.682 START TEST fio_dif_rand_params 00:33:36.682 ************************************ 00:33:36.682 15:18:53 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:33:36.683 15:18:53 -- target/dif.sh@100 -- # local NULL_DIF 00:33:36.683 15:18:53 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:36.683 15:18:53 -- target/dif.sh@103 -- # NULL_DIF=3 00:33:36.683 15:18:53 -- target/dif.sh@103 -- # bs=128k 00:33:36.683 15:18:53 -- target/dif.sh@103 -- # numjobs=3 00:33:36.683 15:18:53 -- target/dif.sh@103 -- # iodepth=3 00:33:36.683 15:18:53 -- target/dif.sh@103 -- # runtime=5 00:33:36.683 15:18:53 -- target/dif.sh@105 -- # create_subsystems 0 00:33:36.683 15:18:53 -- target/dif.sh@28 -- # local sub 00:33:36.683 15:18:53 -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.683 15:18:53 -- target/dif.sh@31 -- # create_subsystem 0 00:33:36.683 15:18:53 -- target/dif.sh@18 -- # local sub_id=0 00:33:36.683 15:18:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:36.683 15:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.683 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.683 bdev_null0 00:33:36.683 15:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.683 15:18:53 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:36.683 15:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.683 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.683 15:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.683 15:18:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:36.683 15:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.683 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.683 15:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.683 15:18:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:36.683 15:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.683 15:18:53 -- common/autotest_common.sh@10 -- # set +x 00:33:36.683 [2024-06-11 15:18:53.758038] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.683 15:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.683 15:18:53 -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:36.683 15:18:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.683 15:18:53 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:36.683 15:18:53 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.683 15:18:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:36.683 15:18:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:36.683 15:18:53 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:36.683 15:18:53 -- nvmf/common.sh@520 -- # config=() 00:33:36.683 15:18:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:36.683 15:18:53 -- target/dif.sh@82 -- # gen_fio_conf 00:33:36.683 15:18:53 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.683 15:18:53 -- nvmf/common.sh@520 -- # local subsystem config 00:33:36.683 15:18:53 -- common/autotest_common.sh@1320 -- # shift 00:33:36.683 15:18:53 -- target/dif.sh@54 -- # local file 00:33:36.683 15:18:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:36.683 15:18:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:36.683 15:18:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.683 15:18:53 -- target/dif.sh@56 -- # cat 00:33:36.683 15:18:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:36.683 { 00:33:36.683 "params": { 00:33:36.683 "name": "Nvme$subsystem", 00:33:36.683 "trtype": "$TEST_TRANSPORT", 00:33:36.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.683 "adrfam": "ipv4", 00:33:36.683 "trsvcid": "$NVMF_PORT", 00:33:36.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.683 "hdgst": ${hdgst:-false}, 00:33:36.683 "ddgst": ${ddgst:-false} 00:33:36.683 }, 00:33:36.683 "method": "bdev_nvme_attach_controller" 00:33:36.683 } 00:33:36.683 EOF 00:33:36.683 )") 00:33:36.683 15:18:53 -- nvmf/common.sh@542 -- # cat 00:33:36.683 15:18:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.683 15:18:53 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:33:36.683 15:18:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:36.683 15:18:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:36.683 15:18:53 -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.683 15:18:53 -- nvmf/common.sh@544 -- # jq . 00:33:36.683 15:18:53 -- nvmf/common.sh@545 -- # IFS=, 00:33:36.683 15:18:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:36.683 "params": { 00:33:36.683 "name": "Nvme0", 00:33:36.683 "trtype": "tcp", 00:33:36.683 "traddr": "10.0.0.2", 00:33:36.683 "adrfam": "ipv4", 00:33:36.683 "trsvcid": "4420", 00:33:36.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.683 "hdgst": false, 00:33:36.683 "ddgst": false 00:33:36.683 }, 00:33:36.683 "method": "bdev_nvme_attach_controller" 00:33:36.683 }' 00:33:36.683 15:18:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:36.683 15:18:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:36.683 15:18:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.683 15:18:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.683 15:18:53 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:36.683 15:18:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:36.683 15:18:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:36.683 15:18:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:36.683 15:18:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:36.683 15:18:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.683 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:36.683 ... 00:33:36.683 fio-3.35 00:33:36.683 Starting 3 threads 00:33:36.683 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.683 [2024-06-11 15:18:54.533064] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:36.683 [2024-06-11 15:18:54.533111] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:40.876 00:33:40.876 filename0: (groupid=0, jobs=1): err= 0: pid=3507334: Tue Jun 11 15:18:59 2024 00:33:40.876 read: IOPS=192, BW=24.1MiB/s (25.3MB/s)(121MiB/5026msec) 00:33:40.876 slat (nsec): min=9209, max=86149, avg=13200.79, stdev=3766.19 00:33:40.876 clat (usec): min=5808, max=95041, avg=15522.08, stdev=13879.24 00:33:40.876 lat (usec): min=5818, max=95058, avg=15535.28, stdev=13879.33 00:33:40.876 clat percentiles (usec): 00:33:40.876 | 1.00th=[ 6128], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 9110], 00:33:40.876 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[11076], 60.00th=[11863], 00:33:40.876 | 70.00th=[12649], 80.00th=[13566], 90.00th=[50594], 95.00th=[53740], 00:33:40.876 | 99.00th=[56361], 99.50th=[56886], 99.90th=[94897], 99.95th=[94897], 00:33:40.876 | 99.99th=[94897] 00:33:40.876 bw ( KiB/s): min=17920, max=36352, per=35.42%, avg=24755.20, stdev=6152.35, samples=10 00:33:40.876 iops : min= 140, max= 284, avg=193.40, stdev=48.07, samples=10 00:33:40.876 lat (msec) : 10=33.92%, 20=54.74%, 50=0.52%, 100=10.82% 00:33:40.876 cpu : usr=95.74%, sys=3.84%, ctx=32, majf=0, minf=112 00:33:40.876 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.876 issued rwts: total=970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.876 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.876 filename0: (groupid=0, jobs=1): err= 0: pid=3507335: Tue Jun 11 15:18:59 2024 00:33:40.876 read: IOPS=144, BW=18.1MiB/s (19.0MB/s)(90.6MiB/5002msec) 00:33:40.876 slat (nsec): min=9301, max=28574, avg=13253.69, stdev=2881.23 00:33:40.876 clat (usec): min=5744, max=96033, avg=20678.69, stdev=16956.67 00:33:40.876 lat (usec): min=5754, max=96043, avg=20691.95, stdev=16956.79 00:33:40.876 clat percentiles (usec): 00:33:40.876 | 1.00th=[ 6783], 5.00th=[ 7701], 10.00th=[ 8848], 20.00th=[10159], 00:33:40.876 | 30.00th=[11207], 40.00th=[13042], 50.00th=[14091], 60.00th=[15401], 00:33:40.876 | 70.00th=[16712], 80.00th=[19792], 90.00th=[55313], 95.00th=[56886], 00:33:40.876 | 99.00th=[59507], 99.50th=[60556], 99.90th=[95945], 99.95th=[95945], 00:33:40.876 | 99.99th=[95945] 00:33:40.876 bw ( KiB/s): min=10773, max=28928, per=25.73%, avg=17979.22, stdev=5130.33, samples=9 00:33:40.876 iops : min= 84, max= 226, avg=140.44, stdev=40.11, samples=9 00:33:40.876 lat (msec) : 10=17.93%, 20=62.07%, 50=2.07%, 100=17.93% 00:33:40.876 cpu : usr=96.38%, sys=3.26%, ctx=10, majf=0, minf=92 00:33:40.876 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.876 issued rwts: total=725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.876 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.876 filename0: (groupid=0, jobs=1): err= 0: pid=3507336: Tue Jun 11 15:18:59 2024 00:33:40.876 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(131MiB/5002msec) 00:33:40.876 slat (nsec): min=9164, max=25238, avg=12931.66, stdev=3024.04 00:33:40.876 clat (usec): min=5711, max=97934, avg=14286.24, stdev=13022.30 00:33:40.876 lat (usec): min=5722, max=97950, avg=14299.17, stdev=13022.44 00:33:40.876 clat 
percentiles (usec): 00:33:40.876 | 1.00th=[ 6194], 5.00th=[ 6849], 10.00th=[ 7373], 20.00th=[ 8225], 00:33:40.876 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11207], 00:33:40.876 | 70.00th=[12387], 80.00th=[13566], 90.00th=[16057], 95.00th=[53216], 00:33:40.876 | 99.00th=[56361], 99.50th=[56886], 99.90th=[93848], 99.95th=[98042], 00:33:40.876 | 99.99th=[98042] 00:33:40.876 bw ( KiB/s): min=17664, max=36096, per=38.83%, avg=27136.00, stdev=6458.61, samples=9 00:33:40.876 iops : min= 138, max= 282, avg=212.00, stdev=50.46, samples=9 00:33:40.876 lat (msec) : 10=44.04%, 20=47.09%, 50=0.19%, 100=8.67% 00:33:40.876 cpu : usr=95.32%, sys=4.26%, ctx=10, majf=0, minf=162 00:33:40.876 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.876 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.876 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.876 00:33:40.876 Run status group 0 (all jobs): 00:33:40.876 READ: bw=68.2MiB/s (71.6MB/s), 18.1MiB/s-26.2MiB/s (19.0MB/s-27.5MB/s), io=343MiB (360MB), run=5002-5026msec 00:33:41.135 15:18:59 -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:41.135 15:18:59 -- target/dif.sh@43 -- # local sub 00:33:41.135 15:18:59 -- target/dif.sh@45 -- # for sub in "$@" 00:33:41.135 15:18:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:41.136 15:18:59 -- target/dif.sh@36 -- # local sub_id=0 00:33:41.136 15:18:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@109 -- # NULL_DIF=2 00:33:41.136 15:18:59 -- target/dif.sh@109 -- # bs=4k 00:33:41.136 15:18:59 -- target/dif.sh@109 -- # numjobs=8 00:33:41.136 15:18:59 -- target/dif.sh@109 -- # iodepth=16 00:33:41.136 15:18:59 -- target/dif.sh@109 -- # runtime= 00:33:41.136 15:18:59 -- target/dif.sh@109 -- # files=2 00:33:41.136 15:18:59 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:41.136 15:18:59 -- target/dif.sh@28 -- # local sub 00:33:41.136 15:18:59 -- target/dif.sh@30 -- # for sub in "$@" 00:33:41.136 15:18:59 -- target/dif.sh@31 -- # create_subsystem 0 00:33:41.136 15:18:59 -- target/dif.sh@18 -- # local sub_id=0 00:33:41.136 15:18:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 bdev_null0 00:33:41.136 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 15:18:59 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 [2024-06-11 15:18:59.939750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:41.136 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@30 -- # for sub in "$@" 00:33:41.136 15:18:59 -- target/dif.sh@31 -- # create_subsystem 1 00:33:41.136 15:18:59 -- target/dif.sh@18 -- # local sub_id=1 00:33:41.136 15:18:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 bdev_null1 00:33:41.136 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:41.136 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.136 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.136 15:18:59 -- target/dif.sh@30 -- # for sub in "$@" 00:33:41.136 15:18:59 -- target/dif.sh@31 -- # create_subsystem 2 00:33:41.396 15:18:59 -- target/dif.sh@18 -- # local sub_id=2 00:33:41.396 15:18:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:41.396 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.396 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 bdev_null2 00:33:41.396 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.396 15:18:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:41.396 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.396 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.396 15:18:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:41.396 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:33:41.396 15:18:59 -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 15:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.396 15:18:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:41.396 15:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.396 15:19:00 -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 15:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.396 15:19:00 -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:41.396 15:19:00 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:41.396 15:19:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:41.396 15:19:00 -- nvmf/common.sh@520 -- # config=() 00:33:41.396 15:19:00 -- nvmf/common.sh@520 -- # local subsystem config 00:33:41.396 15:19:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:41.396 15:19:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:41.396 { 00:33:41.396 "params": { 00:33:41.396 "name": "Nvme$subsystem", 00:33:41.396 "trtype": "$TEST_TRANSPORT", 00:33:41.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:41.396 "adrfam": "ipv4", 00:33:41.396 "trsvcid": "$NVMF_PORT", 00:33:41.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:41.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:41.396 "hdgst": ${hdgst:-false}, 00:33:41.396 "ddgst": ${ddgst:-false} 00:33:41.396 }, 00:33:41.396 "method": "bdev_nvme_attach_controller" 00:33:41.396 } 00:33:41.396 EOF 00:33:41.396 )") 00:33:41.396 15:19:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:41.396 15:19:00 -- target/dif.sh@82 -- # gen_fio_conf 00:33:41.396 15:19:00 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:41.396 15:19:00 -- target/dif.sh@54 -- # local file 00:33:41.396 15:19:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:41.396 15:19:00 -- target/dif.sh@56 -- # cat 00:33:41.396 15:19:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:41.396 15:19:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:41.396 15:19:00 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:41.396 15:19:00 -- common/autotest_common.sh@1320 -- # shift 00:33:41.396 15:19:00 -- nvmf/common.sh@542 -- # cat 00:33:41.396 15:19:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:41.396 15:19:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:41.396 15:19:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:41.396 15:19:00 -- target/dif.sh@72 -- # (( file <= files )) 00:33:41.396 15:19:00 -- target/dif.sh@73 -- # cat 00:33:41.396 15:19:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:41.396 15:19:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:41.396 15:19:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:41.396 15:19:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:41.396 15:19:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:41.396 { 00:33:41.396 "params": { 00:33:41.396 "name": "Nvme$subsystem", 00:33:41.396 "trtype": "$TEST_TRANSPORT", 00:33:41.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:41.396 "adrfam": "ipv4", 
00:33:41.396 "trsvcid": "$NVMF_PORT", 00:33:41.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:41.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:41.396 "hdgst": ${hdgst:-false}, 00:33:41.396 "ddgst": ${ddgst:-false} 00:33:41.396 }, 00:33:41.396 "method": "bdev_nvme_attach_controller" 00:33:41.396 } 00:33:41.396 EOF 00:33:41.396 )") 00:33:41.396 15:19:00 -- nvmf/common.sh@542 -- # cat 00:33:41.396 15:19:00 -- target/dif.sh@72 -- # (( file++ )) 00:33:41.396 15:19:00 -- target/dif.sh@72 -- # (( file <= files )) 00:33:41.396 15:19:00 -- target/dif.sh@73 -- # cat 00:33:41.396 15:19:00 -- target/dif.sh@72 -- # (( file++ )) 00:33:41.396 15:19:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:41.396 15:19:00 -- target/dif.sh@72 -- # (( file <= files )) 00:33:41.396 15:19:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:41.396 { 00:33:41.396 "params": { 00:33:41.396 "name": "Nvme$subsystem", 00:33:41.396 "trtype": "$TEST_TRANSPORT", 00:33:41.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:41.396 "adrfam": "ipv4", 00:33:41.396 "trsvcid": "$NVMF_PORT", 00:33:41.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:41.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:41.396 "hdgst": ${hdgst:-false}, 00:33:41.396 "ddgst": ${ddgst:-false} 00:33:41.396 }, 00:33:41.396 "method": "bdev_nvme_attach_controller" 00:33:41.396 } 00:33:41.396 EOF 00:33:41.396 )") 00:33:41.396 15:19:00 -- nvmf/common.sh@542 -- # cat 00:33:41.396 15:19:00 -- nvmf/common.sh@544 -- # jq . 00:33:41.396 15:19:00 -- nvmf/common.sh@545 -- # IFS=, 00:33:41.396 15:19:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:41.396 "params": { 00:33:41.396 "name": "Nvme0", 00:33:41.396 "trtype": "tcp", 00:33:41.396 "traddr": "10.0.0.2", 00:33:41.396 "adrfam": "ipv4", 00:33:41.396 "trsvcid": "4420", 00:33:41.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:41.396 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:41.396 "hdgst": false, 00:33:41.396 "ddgst": false 00:33:41.396 }, 00:33:41.396 "method": "bdev_nvme_attach_controller" 00:33:41.396 },{ 00:33:41.396 "params": { 00:33:41.396 "name": "Nvme1", 00:33:41.396 "trtype": "tcp", 00:33:41.396 "traddr": "10.0.0.2", 00:33:41.396 "adrfam": "ipv4", 00:33:41.396 "trsvcid": "4420", 00:33:41.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:41.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:41.396 "hdgst": false, 00:33:41.396 "ddgst": false 00:33:41.396 }, 00:33:41.396 "method": "bdev_nvme_attach_controller" 00:33:41.396 },{ 00:33:41.396 "params": { 00:33:41.396 "name": "Nvme2", 00:33:41.396 "trtype": "tcp", 00:33:41.396 "traddr": "10.0.0.2", 00:33:41.396 "adrfam": "ipv4", 00:33:41.396 "trsvcid": "4420", 00:33:41.396 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:41.396 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:41.396 "hdgst": false, 00:33:41.396 "ddgst": false 00:33:41.396 }, 00:33:41.396 "method": "bdev_nvme_attach_controller" 00:33:41.396 }' 00:33:41.396 15:19:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:41.396 15:19:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:41.396 15:19:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:41.396 15:19:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:41.396 15:19:00 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:41.396 15:19:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:41.396 15:19:00 -- common/autotest_common.sh@1324 -- # asan_lib= 
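Note: the fio job file for this run comes from gen_fio_conf on /dev/fd/61 and is never echoed into the log. From the parameters set at the top of the test (bs=4k, numjobs=8, iodepth=16, files=2, i.e. jobs filename0..filename2 over three null bdevs) and the job headers printed below, an equivalent job file would look roughly like the sketch here; the filename= values assume the default SPDK naming where controller NvmeX exposes its first namespace as bdev NvmeXn1:

# hypothetical stand-in for the config gen_fio_conf writes to /dev/fd/61
cat << 'FIO' > /tmp/dif_rand_params.fio
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
[filename2]
filename=Nvme2n1
FIO
# fio is then launched as in the trace: LD_PRELOAD points at the spdk_bdev plugin and
# --spdk_json_conf supplies the three bdev_nvme_attach_controller entries shown above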
00:33:41.396 15:19:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:41.396 15:19:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:41.396 15:19:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:41.656 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:41.656 ... 00:33:41.656 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:41.656 ... 00:33:41.656 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:41.656 ... 00:33:41.656 fio-3.35 00:33:41.656 Starting 24 threads 00:33:41.656 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.034 [2024-06-11 15:19:01.439017] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:33:43.034 [2024-06-11 15:19:01.439072] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:53.012 00:33:53.012 filename0: (groupid=0, jobs=1): err= 0: pid=3508671: Tue Jun 11 15:19:11 2024 00:33:53.012 read: IOPS=132, BW=531KiB/s (544kB/s)(5376KiB/10123msec) 00:33:53.012 slat (nsec): min=9337, max=84106, avg=20510.50, stdev=13776.35 00:33:53.012 clat (msec): min=6, max=670, avg=119.53, stdev=175.44 00:33:53.012 lat (msec): min=6, max=670, avg=119.55, stdev=175.44 00:33:53.012 clat percentiles (msec): 00:33:53.012 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.012 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.012 | 70.00th=[ 37], 80.00th=[ 47], 90.00th=[ 468], 95.00th=[ 481], 00:33:53.012 | 99.00th=[ 600], 99.50th=[ 625], 99.90th=[ 667], 99.95th=[ 667], 00:33:53.012 | 99.99th=[ 667] 00:33:53.012 bw ( KiB/s): min= 127, max= 1920, per=4.33%, avg=531.15, stdev=722.05, samples=20 00:33:53.012 iops : min= 31, max= 480, avg=132.75, stdev=180.53, samples=20 00:33:53.012 lat (msec) : 10=1.71%, 20=1.86%, 50=76.86%, 100=0.52%, 500=16.37% 00:33:53.012 lat (msec) : 750=2.68% 00:33:53.012 cpu : usr=99.01%, sys=0.58%, ctx=25, majf=0, minf=36 00:33:53.012 IO depths : 1=3.9%, 2=10.0%, 4=24.2%, 8=53.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:33:53.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.012 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.012 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.012 filename0: (groupid=0, jobs=1): err= 0: pid=3508672: Tue Jun 11 15:19:11 2024 00:33:53.012 read: IOPS=133, BW=535KiB/s (548kB/s)(5376KiB/10041msec) 00:33:53.012 slat (nsec): min=9396, max=75599, avg=19949.89, stdev=10563.89 00:33:53.012 clat (msec): min=9, max=529, avg=119.34, stdev=172.13 00:33:53.012 lat (msec): min=9, max=529, avg=119.36, stdev=172.13 00:33:53.012 clat percentiles (msec): 00:33:53.012 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.012 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.013 | 70.00th=[ 37], 80.00th=[ 102], 90.00th=[ 468], 95.00th=[ 472], 00:33:53.013 | 99.00th=[ 531], 99.50th=[ 531], 99.90th=[ 531], 99.95th=[ 531], 00:33:53.013 | 99.99th=[ 531] 00:33:53.013 bw ( KiB/s): min= 128, max= 1923, per=4.81%, avg=590.39, stdev=739.07, samples=18 
00:33:53.013 iops : min= 32, max= 480, avg=147.56, stdev=184.69, samples=18 00:33:53.013 lat (msec) : 10=1.64%, 20=1.41%, 50=76.26%, 100=0.45%, 250=1.19% 00:33:53.013 lat (msec) : 500=15.48%, 750=3.57% 00:33:53.013 cpu : usr=96.97%, sys=1.61%, ctx=40, majf=0, minf=31 00:33:53.013 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:53.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.013 filename0: (groupid=0, jobs=1): err= 0: pid=3508673: Tue Jun 11 15:19:11 2024 00:33:53.013 read: IOPS=127, BW=512KiB/s (524kB/s)(5120KiB/10006msec) 00:33:53.013 slat (nsec): min=5953, max=85234, avg=39280.61, stdev=19901.59 00:33:53.013 clat (msec): min=34, max=605, avg=124.72, stdev=177.48 00:33:53.013 lat (msec): min=34, max=605, avg=124.76, stdev=177.47 00:33:53.013 clat percentiles (msec): 00:33:53.013 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.013 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.013 | 70.00th=[ 37], 80.00th=[ 79], 90.00th=[ 468], 95.00th=[ 472], 00:33:53.013 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 609], 99.95th=[ 609], 00:33:53.013 | 99.99th=[ 609] 00:33:53.013 bw ( KiB/s): min= 128, max= 1792, per=4.28%, avg=525.47, stdev=691.80, samples=19 00:33:53.013 iops : min= 32, max= 448, avg=131.37, stdev=172.95, samples=19 00:33:53.013 lat (msec) : 50=78.75%, 100=1.25%, 500=16.25%, 750=3.75% 00:33:53.013 cpu : usr=98.43%, sys=0.71%, ctx=30, majf=0, minf=30 00:33:53.013 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:53.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.013 filename0: (groupid=0, jobs=1): err= 0: pid=3508674: Tue Jun 11 15:19:11 2024 00:33:53.013 read: IOPS=129, BW=517KiB/s (530kB/s)(5184KiB/10019msec) 00:33:53.013 slat (nsec): min=6326, max=82671, avg=30506.68, stdev=18555.56 00:33:53.013 clat (msec): min=26, max=628, avg=123.40, stdev=176.97 00:33:53.013 lat (msec): min=26, max=628, avg=123.44, stdev=176.97 00:33:53.013 clat percentiles (msec): 00:33:53.013 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.013 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.013 | 70.00th=[ 37], 80.00th=[ 51], 90.00th=[ 468], 95.00th=[ 481], 00:33:53.013 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:33:53.013 | 99.99th=[ 625] 00:33:53.013 bw ( KiB/s): min= 128, max= 1792, per=4.18%, avg=512.00, stdev=686.17, samples=20 00:33:53.013 iops : min= 32, max= 448, avg=128.00, stdev=171.54, samples=20 00:33:53.013 lat (msec) : 50=79.01%, 100=1.23%, 500=16.82%, 750=2.93% 00:33:53.013 cpu : usr=98.78%, sys=0.72%, ctx=18, majf=0, minf=22 00:33:53.013 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:33:53.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.013 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:53.013 filename0: (groupid=0, jobs=1): err= 0: pid=3508675: Tue Jun 11 15:19:11 2024 00:33:53.013 read: IOPS=128, BW=513KiB/s (526kB/s)(5184KiB/10097msec) 00:33:53.013 slat (nsec): min=5963, max=86423, avg=33248.67, stdev=16206.81 00:33:53.013 clat (msec): min=34, max=728, avg=123.56, stdev=177.32 00:33:53.013 lat (msec): min=34, max=728, avg=123.59, stdev=177.31 00:33:53.013 clat percentiles (msec): 00:33:53.013 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.013 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.013 | 70.00th=[ 37], 80.00th=[ 57], 90.00th=[ 468], 95.00th=[ 481], 00:33:53.013 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 726], 99.95th=[ 726], 00:33:53.013 | 99.99th=[ 726] 00:33:53.013 bw ( KiB/s): min= 112, max= 1792, per=4.18%, avg=512.15, stdev=686.45, samples=20 00:33:53.013 iops : min= 28, max= 448, avg=128.00, stdev=171.55, samples=20 00:33:53.013 lat (msec) : 50=79.01%, 100=1.23%, 500=16.05%, 750=3.70% 00:33:53.013 cpu : usr=98.94%, sys=0.59%, ctx=49, majf=0, minf=37 00:33:53.013 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:53.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.013 filename0: (groupid=0, jobs=1): err= 0: pid=3508676: Tue Jun 11 15:19:11 2024 00:33:53.013 read: IOPS=129, BW=520KiB/s (532kB/s)(5248KiB/10098msec) 00:33:53.013 slat (nsec): min=9254, max=75560, avg=19533.58, stdev=7736.39 00:33:53.013 clat (msec): min=14, max=665, avg=122.13, stdev=173.81 00:33:53.013 lat (msec): min=14, max=665, avg=122.15, stdev=173.81 00:33:53.013 clat percentiles (msec): 00:33:53.013 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.013 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.013 | 70.00th=[ 37], 80.00th=[ 103], 90.00th=[ 468], 95.00th=[ 481], 00:33:53.013 | 99.00th=[ 531], 99.50th=[ 531], 99.90th=[ 667], 99.95th=[ 667], 00:33:53.013 | 99.99th=[ 667] 00:33:53.013 bw ( KiB/s): min= 112, max= 1792, per=4.23%, avg=518.40, stdev=697.99, samples=20 00:33:53.013 iops : min= 28, max= 448, avg=129.60, stdev=174.50, samples=20 00:33:53.013 lat (msec) : 20=0.15%, 50=78.96%, 100=0.15%, 250=1.22%, 500=15.85% 00:33:53.013 lat (msec) : 750=3.66% 00:33:53.013 cpu : usr=99.07%, sys=0.53%, ctx=15, majf=0, minf=32 00:33:53.013 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:53.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.013 filename0: (groupid=0, jobs=1): err= 0: pid=3508678: Tue Jun 11 15:19:11 2024 00:33:53.013 read: IOPS=120, BW=482KiB/s (493kB/s)(4864KiB/10098msec) 00:33:53.013 slat (nsec): min=5339, max=93541, avg=38718.21, stdev=18962.90 00:33:53.013 clat (msec): min=34, max=994, avg=132.48, stdev=235.41 00:33:53.013 lat (msec): min=34, max=994, avg=132.52, stdev=235.40 00:33:53.013 clat percentiles (msec): 00:33:53.013 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.013 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 
00:33:53.013 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 617], 95.00th=[ 760], 00:33:53.013 | 99.00th=[ 835], 99.50th=[ 919], 99.90th=[ 995], 99.95th=[ 995], 00:33:53.013 | 99.99th=[ 995] 00:33:53.013 bw ( KiB/s): min= 16, max= 1792, per=4.60%, avg=564.71, stdev=722.61, samples=17 00:33:53.013 iops : min= 4, max= 448, avg=141.18, stdev=180.65, samples=17 00:33:53.013 lat (msec) : 50=82.89%, 100=2.47%, 250=0.16%, 500=0.66%, 750=8.39% 00:33:53.013 lat (msec) : 1000=5.43% 00:33:53.013 cpu : usr=98.93%, sys=0.68%, ctx=14, majf=0, minf=28 00:33:53.013 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:53.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.013 filename0: (groupid=0, jobs=1): err= 0: pid=3508679: Tue Jun 11 15:19:11 2024 00:33:53.013 read: IOPS=126, BW=506KiB/s (519kB/s)(5104KiB/10078msec) 00:33:53.013 slat (nsec): min=4750, max=70719, avg=19954.53, stdev=10027.84 00:33:53.013 clat (msec): min=17, max=842, avg=126.27, stdev=184.40 00:33:53.013 lat (msec): min=17, max=842, avg=126.29, stdev=184.40 00:33:53.013 clat percentiles (msec): 00:33:53.013 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.013 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.013 | 70.00th=[ 37], 80.00th=[ 226], 90.00th=[ 468], 95.00th=[ 510], 00:33:53.013 | 99.00th=[ 768], 99.50th=[ 835], 99.90th=[ 844], 99.95th=[ 844], 00:33:53.013 | 99.99th=[ 844] 00:33:53.013 bw ( KiB/s): min= 48, max= 1792, per=4.10%, avg=503.90, stdev=677.07, samples=20 00:33:53.013 iops : min= 12, max= 448, avg=125.95, stdev=169.28, samples=20 00:33:53.013 lat (msec) : 20=0.16%, 50=77.51%, 100=1.33%, 250=2.19%, 500=13.79% 00:33:53.013 lat (msec) : 750=3.92%, 1000=1.10% 00:33:53.013 cpu : usr=98.37%, sys=1.18%, ctx=30, majf=0, minf=31 00:33:53.013 IO depths : 1=0.2%, 2=0.6%, 4=4.6%, 8=77.7%, 16=16.8%, 32=0.0%, >=64=0.0% 00:33:53.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 complete : 0=0.0%, 4=90.3%, 8=8.3%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.013 issued rwts: total=1276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.013 filename1: (groupid=0, jobs=1): err= 0: pid=3508680: Tue Jun 11 15:19:11 2024 00:33:53.013 read: IOPS=123, BW=495KiB/s (506kB/s)(4992KiB/10094msec) 00:33:53.013 slat (nsec): min=4971, max=91258, avg=37257.81, stdev=20124.68 00:33:53.013 clat (msec): min=26, max=911, avg=129.13, stdev=206.23 00:33:53.013 lat (msec): min=26, max=911, avg=129.17, stdev=206.23 00:33:53.013 clat percentiles (msec): 00:33:53.013 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.013 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.013 | 70.00th=[ 37], 80.00th=[ 44], 90.00th=[ 493], 95.00th=[ 617], 00:33:53.013 | 99.00th=[ 768], 99.50th=[ 768], 99.90th=[ 911], 99.95th=[ 911], 00:33:53.013 | 99.99th=[ 911] 00:33:53.013 bw ( KiB/s): min= 16, max= 1792, per=4.23%, avg=518.74, stdev=694.90, samples=19 00:33:53.014 iops : min= 4, max= 448, avg=129.68, stdev=173.72, samples=19 00:33:53.014 lat (msec) : 50=80.61%, 100=1.28%, 250=1.28%, 500=7.21%, 750=5.93% 00:33:53.014 lat (msec) : 1000=3.69% 00:33:53.014 cpu : usr=99.11%, sys=0.49%, ctx=23, majf=0, minf=29 
00:33:53.014 IO depths : 1=0.6%, 2=4.9%, 4=23.1%, 8=59.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:33:53.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.014 filename1: (groupid=0, jobs=1): err= 0: pid=3508681: Tue Jun 11 15:19:11 2024 00:33:53.014 read: IOPS=128, BW=513KiB/s (526kB/s)(5184KiB/10097msec) 00:33:53.014 slat (nsec): min=7401, max=82741, avg=30789.01, stdev=14706.76 00:33:53.014 clat (msec): min=34, max=698, avg=123.58, stdev=177.18 00:33:53.014 lat (msec): min=34, max=698, avg=123.61, stdev=177.17 00:33:53.014 clat percentiles (msec): 00:33:53.014 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.014 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.014 | 70.00th=[ 37], 80.00th=[ 57], 90.00th=[ 468], 95.00th=[ 481], 00:33:53.014 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 701], 99.95th=[ 701], 00:33:53.014 | 99.99th=[ 701] 00:33:53.014 bw ( KiB/s): min= 112, max= 1792, per=4.18%, avg=512.00, stdev=686.19, samples=20 00:33:53.014 iops : min= 28, max= 448, avg=128.00, stdev=171.55, samples=20 00:33:53.014 lat (msec) : 50=79.01%, 100=1.23%, 500=16.05%, 750=3.70% 00:33:53.014 cpu : usr=99.16%, sys=0.41%, ctx=32, majf=0, minf=31 00:33:53.014 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:53.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.014 filename1: (groupid=0, jobs=1): err= 0: pid=3508682: Tue Jun 11 15:19:11 2024 00:33:53.014 read: IOPS=128, BW=515KiB/s (527kB/s)(5192KiB/10091msec) 00:33:53.014 slat (nsec): min=5584, max=79020, avg=19726.89, stdev=8767.13 00:33:53.014 clat (msec): min=17, max=517, avg=124.14, stdev=172.37 00:33:53.014 lat (msec): min=17, max=517, avg=124.16, stdev=172.37 00:33:53.014 clat percentiles (msec): 00:33:53.014 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.014 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.014 | 70.00th=[ 37], 80.00th=[ 275], 90.00th=[ 464], 95.00th=[ 481], 00:33:53.014 | 99.00th=[ 514], 99.50th=[ 518], 99.90th=[ 518], 99.95th=[ 518], 00:33:53.014 | 99.99th=[ 518] 00:33:53.014 bw ( KiB/s): min= 128, max= 1792, per=4.18%, avg=512.95, stdev=676.35, samples=20 00:33:53.014 iops : min= 32, max= 448, avg=128.20, stdev=169.03, samples=20 00:33:53.014 lat (msec) : 20=0.46%, 50=75.65%, 100=2.47%, 250=0.92%, 500=18.03% 00:33:53.014 lat (msec) : 750=2.47% 00:33:53.014 cpu : usr=98.58%, sys=0.75%, ctx=26, majf=0, minf=33 00:33:53.014 IO depths : 1=2.8%, 2=7.2%, 4=19.4%, 8=60.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:33:53.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 complete : 0=0.0%, 4=92.6%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 issued rwts: total=1298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.014 filename1: (groupid=0, jobs=1): err= 0: pid=3508683: Tue Jun 11 15:19:11 2024 00:33:53.014 read: IOPS=128, BW=513KiB/s (525kB/s)(5176KiB/10093msec) 00:33:53.014 slat (nsec): 
min=4382, max=90017, avg=39016.11, stdev=19009.16 00:33:53.014 clat (msec): min=34, max=796, avg=124.39, stdev=177.41 00:33:53.014 lat (msec): min=34, max=796, avg=124.43, stdev=177.40 00:33:53.014 clat percentiles (msec): 00:33:53.014 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.014 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.014 | 70.00th=[ 37], 80.00th=[ 97], 90.00th=[ 468], 95.00th=[ 472], 00:33:53.014 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 793], 99.95th=[ 793], 00:33:53.014 | 99.99th=[ 793] 00:33:53.014 bw ( KiB/s): min= 112, max= 1792, per=4.17%, avg=511.35, stdev=676.63, samples=20 00:33:53.014 iops : min= 28, max= 448, avg=127.80, stdev=169.10, samples=20 00:33:53.014 lat (msec) : 50=77.90%, 100=2.32%, 250=0.15%, 500=16.07%, 750=3.40% 00:33:53.014 lat (msec) : 1000=0.15% 00:33:53.014 cpu : usr=97.35%, sys=1.20%, ctx=87, majf=0, minf=33 00:33:53.014 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:53.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 issued rwts: total=1294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.014 filename1: (groupid=0, jobs=1): err= 0: pid=3508684: Tue Jun 11 15:19:11 2024 00:33:53.014 read: IOPS=120, BW=481KiB/s (492kB/s)(4840KiB/10071msec) 00:33:53.014 slat (nsec): min=6110, max=78952, avg=19625.28, stdev=11412.58 00:33:53.014 clat (msec): min=17, max=941, avg=133.05, stdev=234.67 00:33:53.014 lat (msec): min=17, max=941, avg=133.07, stdev=234.67 00:33:53.014 clat percentiles (msec): 00:33:53.014 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 36], 00:33:53.014 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.014 | 70.00th=[ 37], 80.00th=[ 45], 90.00th=[ 634], 95.00th=[ 776], 00:33:53.014 | 99.00th=[ 844], 99.50th=[ 844], 99.90th=[ 944], 99.95th=[ 944], 00:33:53.014 | 99.99th=[ 944] 00:33:53.014 bw ( KiB/s): min= 16, max= 1808, per=4.32%, avg=530.83, stdev=724.51, samples=18 00:33:53.014 iops : min= 4, max= 452, avg=132.67, stdev=181.07, samples=18 00:33:53.014 lat (msec) : 20=0.66%, 50=80.66%, 100=2.81%, 250=1.16%, 500=0.50% 00:33:53.014 lat (msec) : 750=7.77%, 1000=6.45% 00:33:53.014 cpu : usr=98.59%, sys=0.74%, ctx=127, majf=0, minf=35 00:33:53.014 IO depths : 1=0.7%, 2=3.9%, 4=13.4%, 8=68.0%, 16=14.0%, 32=0.0%, >=64=0.0% 00:33:53.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 complete : 0=0.0%, 4=91.7%, 8=4.9%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.014 filename1: (groupid=0, jobs=1): err= 0: pid=3508685: Tue Jun 11 15:19:11 2024 00:33:53.014 read: IOPS=134, BW=539KiB/s (552kB/s)(5400KiB/10020msec) 00:33:53.014 slat (nsec): min=9312, max=74891, avg=26244.47, stdev=15007.43 00:33:53.014 clat (msec): min=21, max=628, avg=118.54, stdev=175.07 00:33:53.014 lat (msec): min=21, max=628, avg=118.57, stdev=175.07 00:33:53.014 clat percentiles (msec): 00:33:53.014 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 36], 00:33:53.014 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.014 | 70.00th=[ 37], 80.00th=[ 51], 90.00th=[ 468], 95.00th=[ 472], 00:33:53.014 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:33:53.014 
| 99.99th=[ 625] 00:33:53.014 bw ( KiB/s): min= 128, max= 2224, per=4.35%, avg=533.60, stdev=733.73, samples=20 00:33:53.014 iops : min= 32, max= 556, avg=133.40, stdev=183.43, samples=20 00:33:53.014 lat (msec) : 50=79.85%, 100=1.19%, 500=16.59%, 750=2.37% 00:33:53.014 cpu : usr=98.98%, sys=0.60%, ctx=32, majf=0, minf=30 00:33:53.014 IO depths : 1=4.4%, 2=9.7%, 4=22.2%, 8=55.6%, 16=8.1%, 32=0.0%, >=64=0.0% 00:33:53.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.014 filename1: (groupid=0, jobs=1): err= 0: pid=3508686: Tue Jun 11 15:19:11 2024 00:33:53.014 read: IOPS=135, BW=543KiB/s (556kB/s)(5496KiB/10122msec) 00:33:53.014 slat (nsec): min=9403, max=71298, avg=21118.93, stdev=11815.17 00:33:53.014 clat (msec): min=2, max=614, avg=117.52, stdev=170.17 00:33:53.014 lat (msec): min=2, max=614, avg=117.55, stdev=170.17 00:33:53.014 clat percentiles (msec): 00:33:53.014 | 1.00th=[ 3], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.014 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.014 | 70.00th=[ 37], 80.00th=[ 48], 90.00th=[ 468], 95.00th=[ 472], 00:33:53.014 | 99.00th=[ 523], 99.50th=[ 523], 99.90th=[ 617], 99.95th=[ 617], 00:33:53.014 | 99.99th=[ 617] 00:33:53.014 bw ( KiB/s): min= 128, max= 2048, per=4.44%, avg=544.00, stdev=732.09, samples=20 00:33:53.014 iops : min= 32, max= 512, avg=136.00, stdev=183.02, samples=20 00:33:53.014 lat (msec) : 4=1.16%, 10=3.28%, 20=0.22%, 50=75.55%, 100=0.15% 00:33:53.014 lat (msec) : 250=1.02%, 500=17.32%, 750=1.31% 00:33:53.014 cpu : usr=98.95%, sys=0.64%, ctx=11, majf=0, minf=35 00:33:53.014 IO depths : 1=4.9%, 2=11.0%, 4=24.4%, 8=52.0%, 16=7.7%, 32=0.0%, >=64=0.0% 00:33:53.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 issued rwts: total=1374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.014 filename1: (groupid=0, jobs=1): err= 0: pid=3508688: Tue Jun 11 15:19:11 2024 00:33:53.014 read: IOPS=129, BW=518KiB/s (530kB/s)(5240KiB/10118msec) 00:33:53.014 slat (nsec): min=9395, max=74893, avg=28325.34, stdev=16173.27 00:33:53.014 clat (msec): min=32, max=827, avg=123.14, stdev=173.97 00:33:53.014 lat (msec): min=32, max=827, avg=123.17, stdev=173.96 00:33:53.014 clat percentiles (msec): 00:33:53.014 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.014 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.014 | 70.00th=[ 37], 80.00th=[ 226], 90.00th=[ 468], 95.00th=[ 481], 00:33:53.014 | 99.00th=[ 523], 99.50th=[ 523], 99.90th=[ 827], 99.95th=[ 827], 00:33:53.014 | 99.99th=[ 827] 00:33:53.014 bw ( KiB/s): min= 112, max= 1792, per=4.22%, avg=517.60, stdev=683.34, samples=20 00:33:53.014 iops : min= 28, max= 448, avg=129.40, stdev=170.83, samples=20 00:33:53.014 lat (msec) : 50=78.17%, 100=1.22%, 250=1.22%, 500=17.56%, 750=1.68% 00:33:53.014 lat (msec) : 1000=0.15% 00:33:53.014 cpu : usr=97.86%, sys=1.10%, ctx=55, majf=0, minf=30 00:33:53.014 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:53.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 complete : 0=0.0%, 
4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.014 issued rwts: total=1310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.015 filename2: (groupid=0, jobs=1): err= 0: pid=3508689: Tue Jun 11 15:19:11 2024 00:33:53.015 read: IOPS=128, BW=513KiB/s (526kB/s)(5184KiB/10098msec) 00:33:53.015 slat (usec): min=9, max=113, avg=31.01, stdev=15.01 00:33:53.015 clat (msec): min=34, max=719, avg=123.58, stdev=177.24 00:33:53.015 lat (msec): min=34, max=719, avg=123.62, stdev=177.24 00:33:53.015 clat percentiles (msec): 00:33:53.015 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.015 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.015 | 70.00th=[ 37], 80.00th=[ 58], 90.00th=[ 468], 95.00th=[ 481], 00:33:53.015 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 718], 99.95th=[ 718], 00:33:53.015 | 99.99th=[ 718] 00:33:53.015 bw ( KiB/s): min= 112, max= 1792, per=4.18%, avg=512.00, stdev=686.19, samples=20 00:33:53.015 iops : min= 28, max= 448, avg=128.00, stdev=171.55, samples=20 00:33:53.015 lat (msec) : 50=79.01%, 100=1.23%, 500=16.05%, 750=3.70% 00:33:53.015 cpu : usr=98.80%, sys=0.73%, ctx=29, majf=0, minf=28 00:33:53.015 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.015 filename2: (groupid=0, jobs=1): err= 0: pid=3508690: Tue Jun 11 15:19:11 2024 00:33:53.015 read: IOPS=127, BW=511KiB/s (523kB/s)(5168KiB/10112msec) 00:33:53.015 slat (nsec): min=9271, max=71520, avg=21494.37, stdev=13183.56 00:33:53.015 clat (msec): min=26, max=741, avg=124.67, stdev=174.69 00:33:53.015 lat (msec): min=26, max=741, avg=124.69, stdev=174.69 00:33:53.015 clat percentiles (msec): 00:33:53.015 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.015 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.015 | 70.00th=[ 37], 80.00th=[ 338], 90.00th=[ 464], 95.00th=[ 477], 00:33:53.015 | 99.00th=[ 523], 99.50th=[ 575], 99.90th=[ 743], 99.95th=[ 743], 00:33:53.015 | 99.99th=[ 743] 00:33:53.015 bw ( KiB/s): min= 96, max= 1792, per=4.16%, avg=510.40, stdev=673.92, samples=20 00:33:53.015 iops : min= 24, max= 448, avg=127.60, stdev=168.48, samples=20 00:33:53.015 lat (msec) : 50=78.02%, 100=1.24%, 250=0.46%, 500=17.49%, 750=2.79% 00:33:53.015 cpu : usr=97.77%, sys=1.25%, ctx=39, majf=0, minf=67 00:33:53.015 IO depths : 1=1.9%, 2=4.3%, 4=11.4%, 8=69.4%, 16=13.0%, 32=0.0%, >=64=0.0% 00:33:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 complete : 0=0.0%, 4=91.2%, 8=5.5%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 issued rwts: total=1292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.015 filename2: (groupid=0, jobs=1): err= 0: pid=3508691: Tue Jun 11 15:19:11 2024 00:33:53.015 read: IOPS=120, BW=481KiB/s (493kB/s)(4856KiB/10095msec) 00:33:53.015 slat (nsec): min=5945, max=91141, avg=41441.51, stdev=17575.86 00:33:53.015 clat (msec): min=34, max=944, avg=132.64, stdev=235.57 00:33:53.015 lat (msec): min=34, max=944, avg=132.69, stdev=235.56 00:33:53.015 clat percentiles (msec): 00:33:53.015 | 1.00th=[ 35], 
5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.015 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.015 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 617], 95.00th=[ 760], 00:33:53.015 | 99.00th=[ 835], 99.50th=[ 919], 99.90th=[ 944], 99.95th=[ 944], 00:33:53.015 | 99.99th=[ 944] 00:33:53.015 bw ( KiB/s): min= 16, max= 1792, per=4.11%, avg=504.58, stdev=704.63, samples=19 00:33:53.015 iops : min= 4, max= 448, avg=126.11, stdev=176.09, samples=19 00:33:53.015 lat (msec) : 50=83.03%, 100=1.40%, 250=1.07%, 500=0.66%, 750=8.40% 00:33:53.015 lat (msec) : 1000=5.44% 00:33:53.015 cpu : usr=99.28%, sys=0.31%, ctx=14, majf=0, minf=24 00:33:53.015 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 issued rwts: total=1214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.015 filename2: (groupid=0, jobs=1): err= 0: pid=3508692: Tue Jun 11 15:19:11 2024 00:33:53.015 read: IOPS=128, BW=513KiB/s (526kB/s)(5184KiB/10098msec) 00:33:53.015 slat (nsec): min=6348, max=90655, avg=33492.65, stdev=16378.52 00:33:53.015 clat (msec): min=25, max=867, avg=123.56, stdev=177.67 00:33:53.015 lat (msec): min=25, max=867, avg=123.59, stdev=177.66 00:33:53.015 clat percentiles (msec): 00:33:53.015 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.015 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.015 | 70.00th=[ 37], 80.00th=[ 57], 90.00th=[ 468], 95.00th=[ 472], 00:33:53.015 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 869], 99.95th=[ 869], 00:33:53.015 | 99.99th=[ 869] 00:33:53.015 bw ( KiB/s): min= 112, max= 1792, per=4.18%, avg=512.00, stdev=686.19, samples=20 00:33:53.015 iops : min= 28, max= 448, avg=128.00, stdev=171.55, samples=20 00:33:53.015 lat (msec) : 50=79.01%, 100=1.23%, 500=16.20%, 750=3.40%, 1000=0.15% 00:33:53.015 cpu : usr=99.10%, sys=0.43%, ctx=58, majf=0, minf=27 00:33:53.015 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:33:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.015 filename2: (groupid=0, jobs=1): err= 0: pid=3508693: Tue Jun 11 15:19:11 2024 00:33:53.015 read: IOPS=126, BW=508KiB/s (520kB/s)(5120KiB/10082msec) 00:33:53.015 slat (nsec): min=6513, max=95600, avg=36511.77, stdev=17853.80 00:33:53.015 clat (msec): min=34, max=877, avg=124.89, stdev=178.51 00:33:53.015 lat (msec): min=34, max=877, avg=124.93, stdev=178.50 00:33:53.015 clat percentiles (msec): 00:33:53.015 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.015 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.015 | 70.00th=[ 37], 80.00th=[ 78], 90.00th=[ 468], 95.00th=[ 472], 00:33:53.015 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 877], 99.95th=[ 877], 00:33:53.015 | 99.99th=[ 877] 00:33:53.015 bw ( KiB/s): min= 112, max= 1792, per=4.17%, avg=511.20, stdev=676.39, samples=20 00:33:53.015 iops : min= 28, max= 448, avg=127.80, stdev=169.10, samples=20 00:33:53.015 lat (msec) : 50=78.75%, 100=1.25%, 500=16.41%, 750=3.44%, 1000=0.16% 00:33:53.015 cpu : 
usr=97.98%, sys=1.11%, ctx=15, majf=0, minf=31 00:33:53.015 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:33:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.015 filename2: (groupid=0, jobs=1): err= 0: pid=3508694: Tue Jun 11 15:19:11 2024 00:33:53.015 read: IOPS=123, BW=495KiB/s (506kB/s)(4992KiB/10093msec) 00:33:53.015 slat (nsec): min=4805, max=85884, avg=35918.58, stdev=19459.91 00:33:53.015 clat (msec): min=27, max=907, avg=129.13, stdev=206.92 00:33:53.015 lat (msec): min=27, max=907, avg=129.16, stdev=206.91 00:33:53.015 clat percentiles (msec): 00:33:53.015 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.015 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.015 | 70.00th=[ 37], 80.00th=[ 44], 90.00th=[ 510], 95.00th=[ 617], 00:33:53.015 | 99.00th=[ 768], 99.50th=[ 768], 99.90th=[ 911], 99.95th=[ 911], 00:33:53.015 | 99.99th=[ 911] 00:33:53.015 bw ( KiB/s): min= 16, max= 1792, per=4.01%, avg=492.95, stdev=685.69, samples=20 00:33:53.015 iops : min= 4, max= 448, avg=123.20, stdev=171.36, samples=20 00:33:53.015 lat (msec) : 50=80.61%, 100=1.44%, 250=1.12%, 500=6.73%, 750=6.41% 00:33:53.015 lat (msec) : 1000=3.69% 00:33:53.015 cpu : usr=99.03%, sys=0.46%, ctx=36, majf=0, minf=27 00:33:53.015 IO depths : 1=0.4%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.015 filename2: (groupid=0, jobs=1): err= 0: pid=3508695: Tue Jun 11 15:19:11 2024 00:33:53.015 read: IOPS=131, BW=524KiB/s (537kB/s)(5296KiB/10098msec) 00:33:53.015 slat (nsec): min=9364, max=95458, avg=25392.80, stdev=13934.72 00:33:53.015 clat (msec): min=19, max=673, avg=121.01, stdev=176.08 00:33:53.015 lat (msec): min=20, max=673, avg=121.03, stdev=176.08 00:33:53.015 clat percentiles (msec): 00:33:53.015 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 36], 00:33:53.015 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:33:53.015 | 70.00th=[ 37], 80.00th=[ 59], 90.00th=[ 468], 95.00th=[ 481], 00:33:53.015 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 676], 99.95th=[ 676], 00:33:53.015 | 99.99th=[ 676] 00:33:53.015 bw ( KiB/s): min= 112, max= 1824, per=4.27%, avg=523.20, stdev=706.71, samples=20 00:33:53.015 iops : min= 28, max= 456, avg=130.80, stdev=176.68, samples=20 00:33:53.015 lat (msec) : 20=0.08%, 50=79.08%, 100=1.51%, 500=15.56%, 750=3.78% 00:33:53.015 cpu : usr=99.09%, sys=0.51%, ctx=8, majf=0, minf=40 00:33:53.015 IO depths : 1=3.2%, 2=8.8%, 4=22.7%, 8=56.0%, 16=9.4%, 32=0.0%, >=64=0.0% 00:33:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.015 issued rwts: total=1324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.015 filename2: (groupid=0, jobs=1): err= 0: pid=3508696: Tue Jun 11 15:19:11 2024 00:33:53.015 read: IOPS=133, BW=535KiB/s 
(548kB/s)(5424KiB/10133msec) 00:33:53.015 slat (nsec): min=7130, max=71075, avg=18138.43, stdev=10640.05 00:33:53.015 clat (msec): min=5, max=761, avg=119.05, stdev=172.40 00:33:53.015 lat (msec): min=5, max=761, avg=119.07, stdev=172.40 00:33:53.015 clat percentiles (msec): 00:33:53.015 | 1.00th=[ 6], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 36], 00:33:53.016 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:33:53.016 | 70.00th=[ 37], 80.00th=[ 107], 90.00th=[ 464], 95.00th=[ 477], 00:33:53.016 | 99.00th=[ 527], 99.50th=[ 584], 99.90th=[ 760], 99.95th=[ 760], 00:33:53.016 | 99.99th=[ 760] 00:33:53.016 bw ( KiB/s): min= 48, max= 1920, per=4.37%, avg=536.00, stdev=719.81, samples=20 00:33:53.016 iops : min= 12, max= 480, avg=134.00, stdev=179.95, samples=20 00:33:53.016 lat (msec) : 10=1.18%, 20=1.62%, 50=76.25%, 100=0.88%, 250=0.74% 00:33:53.016 lat (msec) : 500=16.81%, 750=2.21%, 1000=0.29% 00:33:53.016 cpu : usr=99.04%, sys=0.55%, ctx=16, majf=0, minf=30 00:33:53.016 IO depths : 1=4.2%, 2=8.8%, 4=19.8%, 8=58.6%, 16=8.6%, 32=0.0%, >=64=0.0% 00:33:53.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.016 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.016 issued rwts: total=1356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.016 00:33:53.016 Run status group 0 (all jobs): 00:33:53.016 READ: bw=12.0MiB/s (12.6MB/s), 481KiB/s-543KiB/s (492kB/s-556kB/s), io=121MiB (127MB), run=10006-10133msec 00:33:53.275 15:19:11 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:53.275 15:19:11 -- target/dif.sh@43 -- # local sub 00:33:53.275 15:19:11 -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.275 15:19:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:53.275 15:19:11 -- target/dif.sh@36 -- # local sub_id=0 00:33:53.275 15:19:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:53.275 15:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.275 15:19:11 -- common/autotest_common.sh@10 -- # set +x 00:33:53.275 15:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.275 15:19:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:53.275 15:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.275 15:19:11 -- common/autotest_common.sh@10 -- # set +x 00:33:53.275 15:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.275 15:19:11 -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.275 15:19:11 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:53.275 15:19:11 -- target/dif.sh@36 -- # local sub_id=1 00:33:53.275 15:19:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:53.275 15:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.275 15:19:11 -- common/autotest_common.sh@10 -- # set +x 00:33:53.275 15:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.275 15:19:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:53.275 15:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.275 15:19:11 -- common/autotest_common.sh@10 -- # set +x 00:33:53.275 15:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.275 15:19:11 -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.275 15:19:11 -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:53.275 15:19:11 -- target/dif.sh@36 -- # local sub_id=2 00:33:53.275 15:19:11 -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:53.275 15:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.275 15:19:11 -- common/autotest_common.sh@10 -- # set +x 00:33:53.275 15:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.275 15:19:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:53.276 15:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.276 15:19:11 -- common/autotest_common.sh@10 -- # set +x 00:33:53.276 15:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.276 15:19:12 -- target/dif.sh@115 -- # NULL_DIF=1 00:33:53.276 15:19:12 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:53.276 15:19:12 -- target/dif.sh@115 -- # numjobs=2 00:33:53.276 15:19:12 -- target/dif.sh@115 -- # iodepth=8 00:33:53.276 15:19:12 -- target/dif.sh@115 -- # runtime=5 00:33:53.276 15:19:12 -- target/dif.sh@115 -- # files=1 00:33:53.276 15:19:12 -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:53.276 15:19:12 -- target/dif.sh@28 -- # local sub 00:33:53.276 15:19:12 -- target/dif.sh@30 -- # for sub in "$@" 00:33:53.276 15:19:12 -- target/dif.sh@31 -- # create_subsystem 0 00:33:53.276 15:19:12 -- target/dif.sh@18 -- # local sub_id=0 00:33:53.276 15:19:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:53.276 15:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.276 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:33:53.276 bdev_null0 00:33:53.276 15:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.276 15:19:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:53.276 15:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.276 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:33:53.276 15:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.276 15:19:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:53.276 15:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.276 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:33:53.276 15:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.276 15:19:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:53.276 15:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.276 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:33:53.276 [2024-06-11 15:19:12.030967] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.276 15:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.276 15:19:12 -- target/dif.sh@30 -- # for sub in "$@" 00:33:53.276 15:19:12 -- target/dif.sh@31 -- # create_subsystem 1 00:33:53.276 15:19:12 -- target/dif.sh@18 -- # local sub_id=1 00:33:53.276 15:19:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:53.276 15:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.276 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:33:53.276 bdev_null1 00:33:53.276 15:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.276 15:19:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:53.276 15:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.276 15:19:12 -- 
common/autotest_common.sh@10 -- # set +x 00:33:53.276 15:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.276 15:19:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:53.276 15:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.276 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:33:53.276 15:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.276 15:19:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.276 15:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.276 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:33:53.276 15:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.276 15:19:12 -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:53.276 15:19:12 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:53.276 15:19:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.276 15:19:12 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.276 15:19:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:53.276 15:19:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:53.276 15:19:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:53.276 15:19:12 -- nvmf/common.sh@520 -- # config=() 00:33:53.276 15:19:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:53.276 15:19:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.276 15:19:12 -- nvmf/common.sh@520 -- # local subsystem config 00:33:53.276 15:19:12 -- target/dif.sh@82 -- # gen_fio_conf 00:33:53.276 15:19:12 -- common/autotest_common.sh@1320 -- # shift 00:33:53.276 15:19:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:53.276 15:19:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:53.276 15:19:12 -- target/dif.sh@54 -- # local file 00:33:53.276 15:19:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.276 15:19:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:53.276 { 00:33:53.276 "params": { 00:33:53.276 "name": "Nvme$subsystem", 00:33:53.276 "trtype": "$TEST_TRANSPORT", 00:33:53.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.276 "adrfam": "ipv4", 00:33:53.276 "trsvcid": "$NVMF_PORT", 00:33:53.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.276 "hdgst": ${hdgst:-false}, 00:33:53.276 "ddgst": ${ddgst:-false} 00:33:53.276 }, 00:33:53.276 "method": "bdev_nvme_attach_controller" 00:33:53.276 } 00:33:53.276 EOF 00:33:53.276 )") 00:33:53.276 15:19:12 -- target/dif.sh@56 -- # cat 00:33:53.276 15:19:12 -- nvmf/common.sh@542 -- # cat 00:33:53.276 15:19:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.276 15:19:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:53.276 15:19:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:53.276 15:19:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:53.276 15:19:12 -- target/dif.sh@72 -- # (( file <= files )) 00:33:53.276 15:19:12 -- target/dif.sh@73 -- # cat 00:33:53.276 15:19:12 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:33:53.276 15:19:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:53.276 { 00:33:53.276 "params": { 00:33:53.276 "name": "Nvme$subsystem", 00:33:53.276 "trtype": "$TEST_TRANSPORT", 00:33:53.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.276 "adrfam": "ipv4", 00:33:53.276 "trsvcid": "$NVMF_PORT", 00:33:53.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.276 "hdgst": ${hdgst:-false}, 00:33:53.276 "ddgst": ${ddgst:-false} 00:33:53.276 }, 00:33:53.276 "method": "bdev_nvme_attach_controller" 00:33:53.276 } 00:33:53.276 EOF 00:33:53.276 )") 00:33:53.276 15:19:12 -- target/dif.sh@72 -- # (( file++ )) 00:33:53.276 15:19:12 -- target/dif.sh@72 -- # (( file <= files )) 00:33:53.276 15:19:12 -- nvmf/common.sh@542 -- # cat 00:33:53.276 15:19:12 -- nvmf/common.sh@544 -- # jq . 00:33:53.276 15:19:12 -- nvmf/common.sh@545 -- # IFS=, 00:33:53.276 15:19:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:53.276 "params": { 00:33:53.276 "name": "Nvme0", 00:33:53.276 "trtype": "tcp", 00:33:53.276 "traddr": "10.0.0.2", 00:33:53.276 "adrfam": "ipv4", 00:33:53.276 "trsvcid": "4420", 00:33:53.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:53.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:53.276 "hdgst": false, 00:33:53.276 "ddgst": false 00:33:53.276 }, 00:33:53.276 "method": "bdev_nvme_attach_controller" 00:33:53.276 },{ 00:33:53.276 "params": { 00:33:53.276 "name": "Nvme1", 00:33:53.276 "trtype": "tcp", 00:33:53.276 "traddr": "10.0.0.2", 00:33:53.276 "adrfam": "ipv4", 00:33:53.276 "trsvcid": "4420", 00:33:53.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.276 "hdgst": false, 00:33:53.276 "ddgst": false 00:33:53.276 }, 00:33:53.276 "method": "bdev_nvme_attach_controller" 00:33:53.276 }' 00:33:53.276 15:19:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:53.276 15:19:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:53.276 15:19:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.276 15:19:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.276 15:19:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:53.276 15:19:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:53.563 15:19:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:53.563 15:19:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:53.563 15:19:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:53.563 15:19:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.826 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:53.826 ... 00:33:53.826 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:53.826 ... 00:33:53.826 fio-3.35 00:33:53.826 Starting 4 threads 00:33:53.826 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.390 [2024-06-11 15:19:13.207423] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:54.390 [2024-06-11 15:19:13.207463] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:59.651 00:33:59.651 filename0: (groupid=0, jobs=1): err= 0: pid=3510813: Tue Jun 11 15:19:18 2024 00:33:59.651 read: IOPS=1850, BW=14.5MiB/s (15.2MB/s)(72.3MiB/5002msec) 00:33:59.651 slat (nsec): min=9170, max=38984, avg=12147.50, stdev=3160.99 00:33:59.651 clat (usec): min=1538, max=47809, avg=4288.75, stdev=1399.37 00:33:59.651 lat (usec): min=1548, max=47834, avg=4300.90, stdev=1399.32 00:33:59.651 clat percentiles (usec): 00:33:59.651 | 1.00th=[ 2966], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3916], 00:33:59.651 | 30.00th=[ 4015], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4293], 00:33:59.651 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 5407], 00:33:59.651 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7242], 99.95th=[47973], 00:33:59.651 | 99.99th=[47973] 00:33:59.651 bw ( KiB/s): min=13531, max=15040, per=25.23%, avg=14778.11, stdev=479.42, samples=9 00:33:59.651 iops : min= 1691, max= 1880, avg=1847.00, stdev=59.87, samples=9 00:33:59.651 lat (msec) : 2=0.03%, 4=28.27%, 10=71.62%, 50=0.09% 00:33:59.651 cpu : usr=96.18%, sys=3.46%, ctx=6, majf=0, minf=88 00:33:59.651 IO depths : 1=0.2%, 2=1.9%, 4=70.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.651 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.651 issued rwts: total=9255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:59.651 filename0: (groupid=0, jobs=1): err= 0: pid=3510814: Tue Jun 11 15:19:18 2024 00:33:59.651 read: IOPS=1742, BW=13.6MiB/s (14.3MB/s)(68.1MiB/5001msec) 00:33:59.651 slat (nsec): min=9163, max=32536, avg=12088.34, stdev=3215.10 00:33:59.651 clat (usec): min=1954, max=45217, avg=4554.91, stdev=1437.71 00:33:59.651 lat (usec): min=1969, max=45243, avg=4567.00, stdev=1437.67 00:33:59.651 clat percentiles (usec): 00:33:59.651 | 1.00th=[ 3195], 5.00th=[ 3621], 10.00th=[ 3851], 20.00th=[ 4047], 00:33:59.651 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:33:59.651 | 70.00th=[ 4555], 80.00th=[ 4883], 90.00th=[ 5669], 95.00th=[ 6194], 00:33:59.651 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 7635], 99.95th=[45351], 00:33:59.651 | 99.99th=[45351] 00:33:59.651 bw ( KiB/s): min=12576, max=14512, per=23.71%, avg=13889.78, stdev=539.77, samples=9 00:33:59.651 iops : min= 1572, max= 1814, avg=1736.22, stdev=67.47, samples=9 00:33:59.651 lat (msec) : 2=0.01%, 4=15.72%, 10=84.18%, 50=0.09% 00:33:59.651 cpu : usr=96.26%, sys=3.32%, ctx=7, majf=0, minf=69 00:33:59.651 IO depths : 1=0.2%, 2=3.3%, 4=68.5%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.651 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.651 issued rwts: total=8716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:59.651 filename1: (groupid=0, jobs=1): err= 0: pid=3510815: Tue Jun 11 15:19:18 2024 00:33:59.651 read: IOPS=1879, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5002msec) 00:33:59.651 slat (nsec): min=9128, max=64060, avg=12254.28, stdev=3325.09 00:33:59.651 clat (usec): min=1679, max=7384, avg=4221.39, stdev=605.74 00:33:59.651 lat (usec): min=1689, max=7394, avg=4233.64, stdev=605.77 00:33:59.651 clat percentiles (usec): 
00:33:59.651 | 1.00th=[ 2671], 5.00th=[ 3261], 10.00th=[ 3589], 20.00th=[ 3884], 00:33:59.651 | 30.00th=[ 4015], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4293], 00:33:59.651 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 5342], 00:33:59.651 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 6980], 00:33:59.651 | 99.99th=[ 7373] 00:33:59.651 bw ( KiB/s): min=14448, max=15840, per=25.68%, avg=15040.00, stdev=377.53, samples=9 00:33:59.651 iops : min= 1806, max= 1980, avg=1880.00, stdev=47.19, samples=9 00:33:59.651 lat (msec) : 2=0.22%, 4=29.00%, 10=70.78% 00:33:59.651 cpu : usr=96.10%, sys=3.48%, ctx=9, majf=0, minf=146 00:33:59.651 IO depths : 1=0.1%, 2=2.7%, 4=69.2%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.651 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.651 issued rwts: total=9401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:59.651 filename1: (groupid=0, jobs=1): err= 0: pid=3510816: Tue Jun 11 15:19:18 2024 00:33:59.651 read: IOPS=1851, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5003msec) 00:33:59.651 slat (nsec): min=2514, max=23183, avg=7862.76, stdev=2945.55 00:33:59.651 clat (usec): min=1666, max=7701, avg=4296.90, stdev=705.20 00:33:59.651 lat (usec): min=1671, max=7714, avg=4304.77, stdev=705.15 00:33:59.651 clat percentiles (usec): 00:33:59.651 | 1.00th=[ 2540], 5.00th=[ 3294], 10.00th=[ 3589], 20.00th=[ 3884], 00:33:59.651 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4293], 00:33:59.651 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5211], 95.00th=[ 5800], 00:33:59.651 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 7111], 99.95th=[ 7439], 00:33:59.651 | 99.99th=[ 7701] 00:33:59.651 bw ( KiB/s): min=14560, max=15488, per=25.36%, avg=14853.33, stdev=307.04, samples=9 00:33:59.651 iops : min= 1820, max= 1936, avg=1856.67, stdev=38.38, samples=9 00:33:59.651 lat (msec) : 2=0.42%, 4=26.37%, 10=73.21% 00:33:59.651 cpu : usr=97.36%, sys=2.26%, ctx=9, majf=0, minf=32 00:33:59.651 IO depths : 1=0.4%, 2=3.6%, 4=68.6%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.651 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.651 issued rwts: total=9261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:59.651 00:33:59.651 Run status group 0 (all jobs): 00:33:59.651 READ: bw=57.2MiB/s (60.0MB/s), 13.6MiB/s-14.7MiB/s (14.3MB/s-15.4MB/s), io=286MiB (300MB), run=5001-5003msec 00:33:59.910 15:19:18 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:59.910 15:19:18 -- target/dif.sh@43 -- # local sub 00:33:59.910 15:19:18 -- target/dif.sh@45 -- # for sub in "$@" 00:33:59.910 15:19:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:59.910 15:19:18 -- target/dif.sh@36 -- # local sub_id=0 00:33:59.910 15:19:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:59.910 15:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.910 15:19:18 -- common/autotest_common.sh@10 -- # set +x 00:33:59.910 15:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.910 15:19:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:59.910 15:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.910 15:19:18 -- 
common/autotest_common.sh@10 -- # set +x 00:33:59.910 15:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.910 15:19:18 -- target/dif.sh@45 -- # for sub in "$@" 00:33:59.910 15:19:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:59.910 15:19:18 -- target/dif.sh@36 -- # local sub_id=1 00:33:59.910 15:19:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:59.910 15:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.910 15:19:18 -- common/autotest_common.sh@10 -- # set +x 00:33:59.910 15:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.910 15:19:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:59.910 15:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.910 15:19:18 -- common/autotest_common.sh@10 -- # set +x 00:33:59.910 15:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.910 00:33:59.910 real 0m24.838s 00:33:59.910 user 5m8.898s 00:33:59.910 sys 0m4.130s 00:33:59.911 15:19:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:59.911 15:19:18 -- common/autotest_common.sh@10 -- # set +x 00:33:59.911 ************************************ 00:33:59.911 END TEST fio_dif_rand_params 00:33:59.911 ************************************ 00:33:59.911 15:19:18 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:59.911 15:19:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:59.911 15:19:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:59.911 15:19:18 -- common/autotest_common.sh@10 -- # set +x 00:33:59.911 ************************************ 00:33:59.911 START TEST fio_dif_digest 00:33:59.911 ************************************ 00:33:59.911 15:19:18 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:33:59.911 15:19:18 -- target/dif.sh@123 -- # local NULL_DIF 00:33:59.911 15:19:18 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:59.911 15:19:18 -- target/dif.sh@125 -- # local hdgst ddgst 00:33:59.911 15:19:18 -- target/dif.sh@127 -- # NULL_DIF=3 00:33:59.911 15:19:18 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:59.911 15:19:18 -- target/dif.sh@127 -- # numjobs=3 00:33:59.911 15:19:18 -- target/dif.sh@127 -- # iodepth=3 00:33:59.911 15:19:18 -- target/dif.sh@127 -- # runtime=10 00:33:59.911 15:19:18 -- target/dif.sh@128 -- # hdgst=true 00:33:59.911 15:19:18 -- target/dif.sh@128 -- # ddgst=true 00:33:59.911 15:19:18 -- target/dif.sh@130 -- # create_subsystems 0 00:33:59.911 15:19:18 -- target/dif.sh@28 -- # local sub 00:33:59.911 15:19:18 -- target/dif.sh@30 -- # for sub in "$@" 00:33:59.911 15:19:18 -- target/dif.sh@31 -- # create_subsystem 0 00:33:59.911 15:19:18 -- target/dif.sh@18 -- # local sub_id=0 00:33:59.911 15:19:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:59.911 15:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.911 15:19:18 -- common/autotest_common.sh@10 -- # set +x 00:33:59.911 bdev_null0 00:33:59.911 15:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.911 15:19:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:59.911 15:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.911 15:19:18 -- common/autotest_common.sh@10 -- # set +x 00:33:59.911 15:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.911 15:19:18 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:59.911 15:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.911 15:19:18 -- common/autotest_common.sh@10 -- # set +x 00:33:59.911 15:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.911 15:19:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:59.911 15:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.911 15:19:18 -- common/autotest_common.sh@10 -- # set +x 00:33:59.911 [2024-06-11 15:19:18.636295] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:59.911 15:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.911 15:19:18 -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:59.911 15:19:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.911 15:19:18 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:59.911 15:19:18 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.911 15:19:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:59.911 15:19:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:59.911 15:19:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:59.911 15:19:18 -- nvmf/common.sh@520 -- # config=() 00:33:59.911 15:19:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:59.911 15:19:18 -- target/dif.sh@82 -- # gen_fio_conf 00:33:59.911 15:19:18 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.911 15:19:18 -- nvmf/common.sh@520 -- # local subsystem config 00:33:59.911 15:19:18 -- common/autotest_common.sh@1320 -- # shift 00:33:59.911 15:19:18 -- target/dif.sh@54 -- # local file 00:33:59.911 15:19:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:59.911 15:19:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:59.911 15:19:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:59.911 15:19:18 -- target/dif.sh@56 -- # cat 00:33:59.911 15:19:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:59.911 { 00:33:59.911 "params": { 00:33:59.911 "name": "Nvme$subsystem", 00:33:59.911 "trtype": "$TEST_TRANSPORT", 00:33:59.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:59.911 "adrfam": "ipv4", 00:33:59.911 "trsvcid": "$NVMF_PORT", 00:33:59.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:59.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:59.911 "hdgst": ${hdgst:-false}, 00:33:59.911 "ddgst": ${ddgst:-false} 00:33:59.911 }, 00:33:59.911 "method": "bdev_nvme_attach_controller" 00:33:59.911 } 00:33:59.911 EOF 00:33:59.911 )") 00:33:59.911 15:19:18 -- nvmf/common.sh@542 -- # cat 00:33:59.911 15:19:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.911 15:19:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:59.911 15:19:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:59.911 15:19:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:59.911 15:19:18 -- target/dif.sh@72 -- # (( file <= files )) 00:33:59.911 15:19:18 -- nvmf/common.sh@544 -- # jq . 
00:33:59.911 15:19:18 -- nvmf/common.sh@545 -- # IFS=, 00:33:59.911 15:19:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:59.911 "params": { 00:33:59.911 "name": "Nvme0", 00:33:59.911 "trtype": "tcp", 00:33:59.911 "traddr": "10.0.0.2", 00:33:59.911 "adrfam": "ipv4", 00:33:59.911 "trsvcid": "4420", 00:33:59.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:59.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:59.911 "hdgst": true, 00:33:59.911 "ddgst": true 00:33:59.911 }, 00:33:59.911 "method": "bdev_nvme_attach_controller" 00:33:59.911 }' 00:33:59.911 15:19:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:59.911 15:19:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:59.911 15:19:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:59.911 15:19:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.911 15:19:18 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:59.911 15:19:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:59.911 15:19:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:59.911 15:19:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:59.911 15:19:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:59.911 15:19:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.482 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:00.482 ... 00:34:00.482 fio-3.35 00:34:00.482 Starting 3 threads 00:34:00.482 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.740 [2024-06-11 15:19:19.516267] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
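The digest pass (fio_dif_digest) follows the same pattern with two deltas visible in the trace: the null bdev is created with --dif-type 3, and the bdev_nvme_attach_controller params carry "hdgst": true and "ddgst": true, so every NVMe/TCP PDU is protected by header and data digests; the workload is 3 threads of 128 KiB reads at queue depth 3 for 10 seconds. A hedged sketch of just the parts that differ, under the same assumptions ($RPC, $SPDK, Nvme0n1 naming) as the earlier sketch:

# fio_dif_digest deltas: DIF type 3 on the bdev, digests enabled on the TCP connection.
"$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# hdgst/ddgst flipped to true, exactly as printed in the JSON above.
cat > /tmp/bdev-digest.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
 {"method":"bdev_nvme_attach_controller","params":{"name":"Nvme0","trtype":"tcp",
  "traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
  "subnqn":"nqn.2016-06.io.spdk:cnode0","hostnqn":"nqn.2016-06.io.spdk:host0",
  "hdgst":true,"ddgst":true}}
]}]}
EOF

# 128 KiB reads, queue depth 3, 3 jobs, 10 s -- matching the digest run that follows.
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" fio --name=filename0 --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/bdev-digest.json --thread=1 --filename=Nvme0n1 \
    --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=10 --time_based=1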
00:34:00.740 [2024-06-11 15:19:19.516314] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:12.933 00:34:12.934 filename0: (groupid=0, jobs=1): err= 0: pid=3512051: Tue Jun 11 15:19:29 2024 00:34:12.934 read: IOPS=196, BW=24.6MiB/s (25.7MB/s)(247MiB/10050msec) 00:34:12.934 slat (nsec): min=9278, max=32867, avg=14913.76, stdev=2558.94 00:34:12.934 clat (usec): min=6525, max=59992, avg=15233.92, stdev=8360.60 00:34:12.934 lat (usec): min=6538, max=60007, avg=15248.83, stdev=8360.99 00:34:12.934 clat percentiles (usec): 00:34:12.934 | 1.00th=[ 6980], 5.00th=[ 7570], 10.00th=[ 8848], 20.00th=[10683], 00:34:12.934 | 30.00th=[11863], 40.00th=[13304], 50.00th=[14877], 60.00th=[15664], 00:34:12.934 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17957], 95.00th=[19268], 00:34:12.934 | 99.00th=[57410], 99.50th=[58983], 99.90th=[60031], 99.95th=[60031], 00:34:12.934 | 99.99th=[60031] 00:34:12.934 bw ( KiB/s): min=19456, max=31744, per=39.91%, avg=25228.80, stdev=3336.26, samples=20 00:34:12.934 iops : min= 152, max= 248, avg=197.10, stdev=26.06, samples=20 00:34:12.934 lat (msec) : 10=15.50%, 20=80.60%, 50=0.46%, 100=3.44% 00:34:12.934 cpu : usr=95.06%, sys=4.56%, ctx=14, majf=0, minf=142 00:34:12.934 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.934 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.934 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:12.934 filename0: (groupid=0, jobs=1): err= 0: pid=3512052: Tue Jun 11 15:19:29 2024 00:34:12.934 read: IOPS=156, BW=19.6MiB/s (20.5MB/s)(197MiB/10047msec) 00:34:12.934 slat (nsec): min=9576, max=24760, avg=15179.12, stdev=2390.37 00:34:12.934 clat (usec): min=6615, max=96213, avg=19116.06, stdev=12643.71 00:34:12.934 lat (usec): min=6627, max=96229, avg=19131.24, stdev=12643.85 00:34:12.934 clat percentiles (usec): 00:34:12.934 | 1.00th=[ 6980], 5.00th=[ 9110], 10.00th=[11076], 20.00th=[12911], 00:34:12.934 | 30.00th=[14615], 40.00th=[15664], 50.00th=[16319], 60.00th=[16909], 00:34:12.934 | 70.00th=[17433], 80.00th=[17957], 90.00th=[20579], 95.00th=[56886], 00:34:12.934 | 99.00th=[59507], 99.50th=[60556], 99.90th=[95945], 99.95th=[95945], 00:34:12.934 | 99.99th=[95945] 00:34:12.934 bw ( KiB/s): min=12032, max=26880, per=31.79%, avg=20097.80, stdev=3432.18, samples=20 00:34:12.934 iops : min= 94, max= 210, avg=157.00, stdev=26.82, samples=20 00:34:12.934 lat (msec) : 10=6.42%, 20=82.64%, 50=1.72%, 100=9.22% 00:34:12.934 cpu : usr=95.74%, sys=3.92%, ctx=14, majf=0, minf=142 00:34:12.934 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.934 issued rwts: total=1573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.934 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:12.934 filename0: (groupid=0, jobs=1): err= 0: pid=3512053: Tue Jun 11 15:19:29 2024 00:34:12.934 read: IOPS=141, BW=17.7MiB/s (18.5MB/s)(177MiB/10007msec) 00:34:12.934 slat (nsec): min=9547, max=34127, avg=15670.99, stdev=2465.70 00:34:12.934 clat (msec): min=10, max=102, avg=21.18, stdev=13.90 00:34:12.934 lat (msec): min=10, max=102, avg=21.20, stdev=13.90 00:34:12.934 clat percentiles (msec): 
00:34:12.934 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:34:12.934 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 17], 00:34:12.934 | 70.00th=[ 18], 80.00th=[ 19], 90.00th=[ 56], 95.00th=[ 58], 00:34:12.934 | 99.00th=[ 61], 99.50th=[ 62], 99.90th=[ 102], 99.95th=[ 103], 00:34:12.934 | 99.99th=[ 103] 00:34:12.934 bw ( KiB/s): min=13824, max=21760, per=28.61%, avg=18086.40, stdev=2122.88, samples=20 00:34:12.934 iops : min= 108, max= 170, avg=141.30, stdev=16.59, samples=20 00:34:12.934 lat (msec) : 20=86.72%, 50=1.55%, 100=11.58%, 250=0.14% 00:34:12.934 cpu : usr=95.46%, sys=4.21%, ctx=16, majf=0, minf=115 00:34:12.934 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.934 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.934 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:12.934 00:34:12.934 Run status group 0 (all jobs): 00:34:12.934 READ: bw=61.7MiB/s (64.7MB/s), 17.7MiB/s-24.6MiB/s (18.5MB/s-25.7MB/s), io=620MiB (651MB), run=10007-10050msec 00:34:12.934 15:19:29 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:12.934 15:19:29 -- target/dif.sh@43 -- # local sub 00:34:12.934 15:19:29 -- target/dif.sh@45 -- # for sub in "$@" 00:34:12.934 15:19:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:12.934 15:19:29 -- target/dif.sh@36 -- # local sub_id=0 00:34:12.934 15:19:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.934 15:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.934 15:19:29 -- common/autotest_common.sh@10 -- # set +x 00:34:12.934 15:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.934 15:19:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:12.934 15:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.934 15:19:29 -- common/autotest_common.sh@10 -- # set +x 00:34:12.934 15:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.934 00:34:12.934 real 0m11.276s 00:34:12.934 user 0m40.497s 00:34:12.934 sys 0m1.579s 00:34:12.934 15:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:12.934 15:19:29 -- common/autotest_common.sh@10 -- # set +x 00:34:12.934 ************************************ 00:34:12.934 END TEST fio_dif_digest 00:34:12.934 ************************************ 00:34:12.934 15:19:29 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:12.934 15:19:29 -- target/dif.sh@147 -- # nvmftestfini 00:34:12.934 15:19:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:12.934 15:19:29 -- nvmf/common.sh@116 -- # sync 00:34:12.934 15:19:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:12.934 15:19:29 -- nvmf/common.sh@119 -- # set +e 00:34:12.934 15:19:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:12.934 15:19:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:12.934 rmmod nvme_tcp 00:34:12.934 rmmod nvme_fabrics 00:34:12.934 rmmod nvme_keyring 00:34:12.934 15:19:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:12.934 15:19:29 -- nvmf/common.sh@123 -- # set -e 00:34:12.934 15:19:29 -- nvmf/common.sh@124 -- # return 0 00:34:12.934 15:19:29 -- nvmf/common.sh@477 -- # '[' -n 3502624 ']' 00:34:12.934 15:19:29 -- nvmf/common.sh@478 -- # killprocess 3502624 00:34:12.934 15:19:29 -- common/autotest_common.sh@926 -- # '[' -z 3502624 ']' 
00:34:12.934 15:19:29 -- common/autotest_common.sh@930 -- # kill -0 3502624 00:34:12.934 15:19:29 -- common/autotest_common.sh@931 -- # uname 00:34:12.934 15:19:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:12.934 15:19:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3502624 00:34:12.934 15:19:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:12.934 15:19:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:12.934 15:19:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3502624' 00:34:12.934 killing process with pid 3502624 00:34:12.934 15:19:30 -- common/autotest_common.sh@945 -- # kill 3502624 00:34:12.934 15:19:30 -- common/autotest_common.sh@950 -- # wait 3502624 00:34:12.934 15:19:30 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:12.934 15:19:30 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:14.870 Waiting for block devices as requested 00:34:14.870 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:34:14.870 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:14.870 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:14.870 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:14.870 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:15.128 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:15.128 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:15.128 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:15.386 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:15.386 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:15.386 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:15.644 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:15.644 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:15.644 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:15.644 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:15.902 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:15.902 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:15.902 15:19:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:15.902 15:19:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:15.902 15:19:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:15.902 15:19:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:15.902 15:19:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.902 15:19:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:15.902 15:19:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.431 15:19:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:18.431 00:34:18.431 real 1m16.150s 00:34:18.431 user 7m43.806s 00:34:18.431 sys 0m18.802s 00:34:18.431 15:19:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:18.431 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:34:18.431 ************************************ 00:34:18.431 END TEST nvmf_dif 00:34:18.431 ************************************ 00:34:18.431 15:19:36 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:18.431 15:19:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:18.431 15:19:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:18.431 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:34:18.431 ************************************ 00:34:18.431 START TEST nvmf_abort_qd_sizes 00:34:18.431 ************************************ 00:34:18.431 
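The abort_qd_sizes test that begins here exercises abort handling for outstanding I/O at several queue depths. As the trace below shows, it first exports the local 0000:86:00.0 NVMe drive through a userspace SPDK target (subsystem nqn.2016-06.io.spdk:spdk_target over TCP) and later repeats the run against the Linux kernel nvmet target, invoking the abort example at queue depths 4, 24 and 64 each time. The core loop, reconstructed from the trace as a sketch (the real script assembles the -r target string field by field inside its rabort() helper, and uses the full workspace path to the binary):

for qd in 4 24 64; do
  # -w rw with -M 50 requests a mixed read/write workload; -o 4096 sets a 4 KiB I/O size.
  # Each pass submits I/O at the given queue depth and issues aborts against it,
  # reporting how many aborts succeeded, were unsuccessful, or could not be submitted.
  build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
done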
15:19:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:18.431 * Looking for test storage... 00:34:18.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:18.431 15:19:36 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.431 15:19:36 -- nvmf/common.sh@7 -- # uname -s 00:34:18.431 15:19:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.431 15:19:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.431 15:19:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.431 15:19:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.431 15:19:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.431 15:19:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.431 15:19:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.431 15:19:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.431 15:19:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.431 15:19:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.431 15:19:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:18.431 15:19:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:18.431 15:19:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.431 15:19:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.431 15:19:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.431 15:19:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:18.431 15:19:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.431 15:19:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.431 15:19:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.431 15:19:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.431 15:19:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.431 15:19:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.431 15:19:36 -- paths/export.sh@5 -- # export PATH 00:34:18.431 15:19:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.431 15:19:36 -- nvmf/common.sh@46 -- # : 0 00:34:18.431 15:19:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:18.431 15:19:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:18.431 15:19:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:18.431 15:19:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.431 15:19:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.431 15:19:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:18.431 15:19:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:18.431 15:19:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:18.431 15:19:36 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:34:18.431 15:19:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:18.431 15:19:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.431 15:19:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:18.431 15:19:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:18.431 15:19:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:18.431 15:19:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.431 15:19:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:18.431 15:19:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.431 15:19:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:18.431 15:19:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:18.431 15:19:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:18.431 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:34:24.990 15:19:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:24.990 15:19:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:24.990 15:19:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:24.990 15:19:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:24.990 15:19:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:24.990 15:19:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:24.990 15:19:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:24.990 15:19:43 -- nvmf/common.sh@294 -- # net_devs=() 00:34:24.990 15:19:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:24.990 15:19:43 -- nvmf/common.sh@295 -- # e810=() 00:34:24.990 15:19:43 -- nvmf/common.sh@295 -- # local -ga e810 00:34:24.990 15:19:43 -- nvmf/common.sh@296 -- # x722=() 00:34:24.990 15:19:43 -- nvmf/common.sh@296 -- # local -ga x722 00:34:24.990 15:19:43 -- nvmf/common.sh@297 -- # mlx=() 00:34:24.990 15:19:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:24.990 15:19:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:24.990 15:19:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:24.990 15:19:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:24.990 15:19:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:24.990 15:19:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:24.990 15:19:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:24.990 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:24.990 15:19:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:24.990 15:19:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:24.990 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:24.990 15:19:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:24.990 15:19:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:24.990 15:19:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:24.990 15:19:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:24.990 15:19:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:24.990 15:19:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:24.990 15:19:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:24.990 Found net devices under 0000:af:00.0: cvl_0_0 00:34:24.990 15:19:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:24.990 15:19:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:24.990 15:19:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:24.990 15:19:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:24.990 15:19:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:24.990 15:19:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:24.990 Found net devices under 0000:af:00.1: cvl_0_1 00:34:24.990 15:19:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:24.991 15:19:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:24.991 15:19:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:24.991 15:19:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:24.991 15:19:43 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:24.991 15:19:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:24.991 15:19:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:24.991 15:19:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:24.991 15:19:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:24.991 15:19:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:24.991 15:19:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:24.991 15:19:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:24.991 15:19:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:24.991 15:19:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:24.991 15:19:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:24.991 15:19:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:24.991 15:19:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:24.991 15:19:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:24.991 15:19:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:24.991 15:19:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:24.991 15:19:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:24.991 15:19:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:24.991 15:19:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:24.991 15:19:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:24.991 15:19:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:24.991 15:19:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:24.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:24.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:34:24.991 00:34:24.991 --- 10.0.0.2 ping statistics --- 00:34:24.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.991 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:34:24.991 15:19:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:24.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:24.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:34:24.991 00:34:24.991 --- 10.0.0.1 ping statistics --- 00:34:24.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.991 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:34:24.991 15:19:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:24.991 15:19:43 -- nvmf/common.sh@410 -- # return 0 00:34:24.991 15:19:43 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:24.991 15:19:43 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:28.268 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:28.268 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:28.269 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:28.269 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:28.269 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:28.835 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:34:28.835 15:19:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.835 15:19:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:28.835 15:19:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:28.835 15:19:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.835 15:19:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:28.835 15:19:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:28.835 15:19:47 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:34:28.835 15:19:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:28.835 15:19:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:28.835 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:34:28.835 15:19:47 -- nvmf/common.sh@469 -- # nvmfpid=3521204 00:34:28.835 15:19:47 -- nvmf/common.sh@470 -- # waitforlisten 3521204 00:34:28.835 15:19:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:28.835 15:19:47 -- common/autotest_common.sh@819 -- # '[' -z 3521204 ']' 00:34:28.835 15:19:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.836 15:19:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:28.836 15:19:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.836 15:19:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:28.836 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:34:29.093 [2024-06-11 15:19:47.717019] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
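At this point the harness has split the two E810 ports across network namespaces: cvl_0_0 was moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt has just been launched inside that namespace. The spdk_target_abort trace that follows configures this target over its default RPC socket; the equivalent calls issued directly with rpc.py would look like the sketch below (this is what the harness's rpc_cmd wrapper does, not a verbatim excerpt, and it assumes the commands are run from the SPDK repository root).

# Claim the local NVMe drive as a bdev, then export it over NVMe-oF/TCP on 10.0.0.2:4420.
scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420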
00:34:29.093 [2024-06-11 15:19:47.717078] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.093 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.093 [2024-06-11 15:19:47.811216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:29.093 [2024-06-11 15:19:47.904362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:29.093 [2024-06-11 15:19:47.904495] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.093 [2024-06-11 15:19:47.904506] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.093 [2024-06-11 15:19:47.904515] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:29.093 [2024-06-11 15:19:47.904569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.093 [2024-06-11 15:19:47.904585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:29.093 [2024-06-11 15:19:47.904725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.093 [2024-06-11 15:19:47.904725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:30.025 15:19:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:30.025 15:19:48 -- common/autotest_common.sh@852 -- # return 0 00:34:30.025 15:19:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:30.025 15:19:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:30.025 15:19:48 -- common/autotest_common.sh@10 -- # set +x 00:34:30.025 15:19:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.025 15:19:48 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:30.025 15:19:48 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:34:30.025 15:19:48 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:34:30.025 15:19:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:34:30.025 15:19:48 -- scripts/common.sh@312 -- # local nvmes 00:34:30.025 15:19:48 -- scripts/common.sh@314 -- # [[ -n 0000:86:00.0 ]] 00:34:30.025 15:19:48 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:30.025 15:19:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:34:30.025 15:19:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 00:34:30.025 15:19:48 -- scripts/common.sh@322 -- # uname -s 00:34:30.025 15:19:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:34:30.025 15:19:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:34:30.025 15:19:48 -- scripts/common.sh@327 -- # (( 1 )) 00:34:30.025 15:19:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:86:00.0 00:34:30.025 15:19:48 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:34:30.025 15:19:48 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:86:00.0 00:34:30.025 15:19:48 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:34:30.025 15:19:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:30.025 15:19:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:30.025 15:19:48 -- common/autotest_common.sh@10 -- # set +x 00:34:30.025 ************************************ 00:34:30.025 START TEST 
spdk_target_abort 00:34:30.025 ************************************ 00:34:30.025 15:19:48 -- common/autotest_common.sh@1104 -- # spdk_target 00:34:30.025 15:19:48 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:30.025 15:19:48 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:30.025 15:19:48 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:34:30.025 15:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:30.025 15:19:48 -- common/autotest_common.sh@10 -- # set +x 00:34:33.301 spdk_targetn1 00:34:33.301 15:19:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:33.301 15:19:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.301 15:19:51 -- common/autotest_common.sh@10 -- # set +x 00:34:33.301 [2024-06-11 15:19:51.488097] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.301 15:19:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:34:33.301 15:19:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.301 15:19:51 -- common/autotest_common.sh@10 -- # set +x 00:34:33.301 15:19:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:34:33.301 15:19:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.301 15:19:51 -- common/autotest_common.sh@10 -- # set +x 00:34:33.301 15:19:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:34:33.301 15:19:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.301 15:19:51 -- common/autotest_common.sh@10 -- # set +x 00:34:33.301 [2024-06-11 15:19:51.524351] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.301 15:19:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:33.301 15:19:51 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:33.301 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.575 Initializing NVMe Controllers 00:34:36.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:36.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:36.575 Initialization complete. Launching workers. 00:34:36.575 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9006, failed: 0 00:34:36.575 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1754, failed to submit 7252 00:34:36.575 success 831, unsuccess 923, failed 0 00:34:36.575 15:19:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.575 15:19:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:36.575 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.848 Initializing NVMe Controllers 00:34:39.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:39.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:39.848 Initialization complete. Launching workers. 00:34:39.848 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8568, failed: 0 00:34:39.848 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1220, failed to submit 7348 00:34:39.848 success 350, unsuccess 870, failed 0 00:34:39.848 15:19:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:39.848 15:19:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:39.848 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.126 Initializing NVMe Controllers 00:34:43.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:43.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:43.126 Initialization complete. Launching workers. 
00:34:43.126 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 37883, failed: 0 00:34:43.126 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2650, failed to submit 35233 00:34:43.126 success 598, unsuccess 2052, failed 0 00:34:43.126 15:20:01 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:34:43.126 15:20:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:43.126 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:34:43.126 15:20:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:43.126 15:20:01 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:43.126 15:20:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:43.126 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:34:44.059 15:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.059 15:20:02 -- target/abort_qd_sizes.sh@62 -- # killprocess 3521204 00:34:44.059 15:20:02 -- common/autotest_common.sh@926 -- # '[' -z 3521204 ']' 00:34:44.059 15:20:02 -- common/autotest_common.sh@930 -- # kill -0 3521204 00:34:44.059 15:20:02 -- common/autotest_common.sh@931 -- # uname 00:34:44.059 15:20:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:44.059 15:20:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3521204 00:34:44.059 15:20:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:44.059 15:20:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:44.059 15:20:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3521204' 00:34:44.059 killing process with pid 3521204 00:34:44.059 15:20:02 -- common/autotest_common.sh@945 -- # kill 3521204 00:34:44.059 15:20:02 -- common/autotest_common.sh@950 -- # wait 3521204 00:34:44.317 00:34:44.317 real 0m14.289s 00:34:44.317 user 0m56.849s 00:34:44.317 sys 0m2.155s 00:34:44.317 15:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:44.317 15:20:02 -- common/autotest_common.sh@10 -- # set +x 00:34:44.317 ************************************ 00:34:44.317 END TEST spdk_target_abort 00:34:44.317 ************************************ 00:34:44.317 15:20:02 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:34:44.317 15:20:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:44.317 15:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:44.317 15:20:02 -- common/autotest_common.sh@10 -- # set +x 00:34:44.317 ************************************ 00:34:44.317 START TEST kernel_target_abort 00:34:44.317 ************************************ 00:34:44.317 15:20:02 -- common/autotest_common.sh@1104 -- # kernel_target 00:34:44.317 15:20:02 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:34:44.317 15:20:02 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:34:44.317 15:20:02 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:34:44.317 15:20:02 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:34:44.317 15:20:02 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:34:44.317 15:20:02 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:44.317 15:20:02 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:44.317 15:20:02 -- nvmf/common.sh@627 -- # local block nvme 00:34:44.317 15:20:02 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:34:44.317 15:20:02 -- nvmf/common.sh@630 -- # modprobe nvmet 00:34:44.317 15:20:02 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:44.317 15:20:02 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:47.604 Waiting for block devices as requested 00:34:47.604 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:34:47.604 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:47.604 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:47.604 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:47.863 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:47.863 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:47.863 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:47.863 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:48.122 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:48.122 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:48.122 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:48.380 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:48.380 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:48.380 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:48.380 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:48.639 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:48.639 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:48.639 15:20:07 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:34:48.639 15:20:07 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:48.639 15:20:07 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:34:48.639 15:20:07 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:34:48.639 15:20:07 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:48.898 No valid GPT data, bailing 00:34:48.898 15:20:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:48.898 15:20:07 -- scripts/common.sh@393 -- # pt= 00:34:48.898 15:20:07 -- scripts/common.sh@394 -- # return 1 00:34:48.898 15:20:07 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:34:48.898 15:20:07 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:34:48.898 15:20:07 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:48.898 15:20:07 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:48.898 15:20:07 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:48.898 15:20:07 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:34:48.898 15:20:07 -- nvmf/common.sh@654 -- # echo 1 00:34:48.898 15:20:07 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:34:48.898 15:20:07 -- nvmf/common.sh@656 -- # echo 1 00:34:48.898 15:20:07 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:34:48.898 15:20:07 -- nvmf/common.sh@663 -- # echo tcp 00:34:48.898 15:20:07 -- nvmf/common.sh@664 -- # echo 4420 00:34:48.898 15:20:07 -- nvmf/common.sh@665 -- # echo ipv4 00:34:48.898 15:20:07 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:48.898 15:20:07 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:48.898 00:34:48.898 Discovery Log Number of Records 2, Generation counter 2 00:34:48.898 =====Discovery Log Entry 0====== 00:34:48.898 trtype: tcp 00:34:48.898 adrfam: ipv4 00:34:48.898 
subtype: current discovery subsystem 00:34:48.898 treq: not specified, sq flow control disable supported 00:34:48.898 portid: 1 00:34:48.898 trsvcid: 4420 00:34:48.898 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:48.898 traddr: 10.0.0.1 00:34:48.898 eflags: none 00:34:48.898 sectype: none 00:34:48.898 =====Discovery Log Entry 1====== 00:34:48.898 trtype: tcp 00:34:48.898 adrfam: ipv4 00:34:48.898 subtype: nvme subsystem 00:34:48.898 treq: not specified, sq flow control disable supported 00:34:48.898 portid: 1 00:34:48.898 trsvcid: 4420 00:34:48.898 subnqn: kernel_target 00:34:48.898 traddr: 10.0.0.1 00:34:48.898 eflags: none 00:34:48.898 sectype: none 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:48.898 15:20:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:48.898 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.182 Initializing NVMe Controllers 00:34:52.182 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:52.182 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:52.182 Initialization complete. Launching workers. 
00:34:52.182 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 37601, failed: 0 00:34:52.182 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 37601, failed to submit 0 00:34:52.182 success 0, unsuccess 37601, failed 0 00:34:52.182 15:20:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:52.182 15:20:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:52.182 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.463 Initializing NVMe Controllers 00:34:55.463 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:55.463 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:55.463 Initialization complete. Launching workers. 00:34:55.463 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68983, failed: 0 00:34:55.463 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 17414, failed to submit 51569 00:34:55.463 success 0, unsuccess 17414, failed 0 00:34:55.463 15:20:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:55.463 15:20:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:55.463 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.087 Initializing NVMe Controllers 00:34:58.087 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:58.087 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:58.087 Initialization complete. Launching workers. 
00:34:58.087 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 67014, failed: 0 00:34:58.087 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 16742, failed to submit 50272 00:34:58.087 success 0, unsuccess 16742, failed 0 00:34:58.087 15:20:16 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:34:58.087 15:20:16 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:34:58.087 15:20:16 -- nvmf/common.sh@677 -- # echo 0 00:34:58.345 15:20:16 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:34:58.345 15:20:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:58.345 15:20:16 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:58.345 15:20:16 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:58.345 15:20:16 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:34:58.345 15:20:16 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:34:58.345 00:34:58.345 real 0m14.029s 00:34:58.345 user 0m5.990s 00:34:58.345 sys 0m3.710s 00:34:58.345 15:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:58.345 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:34:58.345 ************************************ 00:34:58.345 END TEST kernel_target_abort 00:34:58.345 ************************************ 00:34:58.345 15:20:17 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:34:58.345 15:20:17 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:34:58.345 15:20:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:58.345 15:20:17 -- nvmf/common.sh@116 -- # sync 00:34:58.345 15:20:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:58.345 15:20:17 -- nvmf/common.sh@119 -- # set +e 00:34:58.345 15:20:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:58.345 15:20:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:58.345 rmmod nvme_tcp 00:34:58.345 rmmod nvme_fabrics 00:34:58.345 rmmod nvme_keyring 00:34:58.345 15:20:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:58.345 15:20:17 -- nvmf/common.sh@123 -- # set -e 00:34:58.345 15:20:17 -- nvmf/common.sh@124 -- # return 0 00:34:58.345 15:20:17 -- nvmf/common.sh@477 -- # '[' -n 3521204 ']' 00:34:58.345 15:20:17 -- nvmf/common.sh@478 -- # killprocess 3521204 00:34:58.345 15:20:17 -- common/autotest_common.sh@926 -- # '[' -z 3521204 ']' 00:34:58.345 15:20:17 -- common/autotest_common.sh@930 -- # kill -0 3521204 00:34:58.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3521204) - No such process 00:34:58.345 15:20:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3521204 is not found' 00:34:58.345 Process with pid 3521204 is not found 00:34:58.345 15:20:17 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:58.345 15:20:17 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:01.630 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:35:01.630 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:00:04.2 (8086 2021): Already using the ioatdma 
driver 00:35:01.630 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:01.630 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:01.630 15:20:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:01.630 15:20:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:01.630 15:20:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:01.630 15:20:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:01.630 15:20:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.630 15:20:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:01.630 15:20:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.530 15:20:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:03.530 00:35:03.530 real 0m45.465s 00:35:03.530 user 1m7.333s 00:35:03.530 sys 0m15.201s 00:35:03.530 15:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:03.530 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:35:03.530 ************************************ 00:35:03.530 END TEST nvmf_abort_qd_sizes 00:35:03.530 ************************************ 00:35:03.530 15:20:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:03.530 15:20:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:03.530 15:20:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:03.530 15:20:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:03.530 15:20:22 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:35:03.530 15:20:22 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:35:03.530 15:20:22 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:35:03.530 15:20:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:03.530 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:35:03.530 15:20:22 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:35:03.530 15:20:22 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:35:03.530 15:20:22 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:35:03.530 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:35:08.800 INFO: APP EXITING 00:35:08.800 INFO: killing all VMs 00:35:08.800 INFO: killing vhost app 00:35:08.800 WARN: no vhost pid file found 00:35:08.800 INFO: EXIT DONE 00:35:12.091 0000:86:00.0 (8086 0a54): Already using the nvme 
driver 00:35:12.091 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:12.091 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:15.403 Cleaning 00:35:15.403 Removing: /var/run/dpdk/spdk0/config 00:35:15.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:15.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:15.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:15.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:15.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:15.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:15.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:15.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:15.403 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:15.403 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:15.403 Removing: /var/run/dpdk/spdk1/config 00:35:15.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:15.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:15.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:15.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:15.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:15.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:15.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:15.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:15.403 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:15.403 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:15.403 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:15.403 Removing: /var/run/dpdk/spdk2/config 00:35:15.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:15.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:15.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:15.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:15.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:15.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:15.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:15.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:15.403 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:15.403 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:15.403 Removing: /var/run/dpdk/spdk3/config 00:35:15.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:15.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 
00:35:15.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:15.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:15.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:15.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:15.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:15.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:15.403 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:15.403 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:15.403 Removing: /var/run/dpdk/spdk4/config 00:35:15.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:15.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:15.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:15.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:15.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:15.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:15.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:15.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:15.403 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:15.403 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:15.403 Removing: /dev/shm/bdev_svc_trace.1 00:35:15.403 Removing: /dev/shm/nvmf_trace.0 00:35:15.403 Removing: /dev/shm/spdk_tgt_trace.pid3074521 00:35:15.403 Removing: /var/run/dpdk/spdk0 00:35:15.403 Removing: /var/run/dpdk/spdk1 00:35:15.403 Removing: /var/run/dpdk/spdk2 00:35:15.403 Removing: /var/run/dpdk/spdk3 00:35:15.403 Removing: /var/run/dpdk/spdk4 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3071933 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3073315 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3074521 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3075251 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3076979 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3078532 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3078861 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3079195 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3079542 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3080146 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3080594 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3080827 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3081122 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3082218 00:35:15.403 Removing: /var/run/dpdk/spdk_pid3085888 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3086199 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3086504 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3086710 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3087166 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3087343 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3087910 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3088177 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3088468 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3088553 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3088780 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3089044 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3089668 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3089896 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3090230 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3090563 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3090598 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3090666 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3090925 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3091215 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3091483 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3091770 00:35:15.663 Removing: 
/var/run/dpdk/spdk_pid3092037 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3092324 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3092588 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3092872 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3093142 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3093423 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3093695 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3093977 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3094242 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3094529 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3094796 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3095085 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3095351 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3095634 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3095906 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3096185 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3096460 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3096739 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3097005 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3097293 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3097559 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3097848 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3098117 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3098396 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3098668 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3098950 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3099223 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3099503 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3099780 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3100063 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3100330 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3100623 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3100887 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3101174 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3101441 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3101729 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3101982 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3102380 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3106790 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3198278 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3203395 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3215367 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3221555 00:35:15.663 Removing: /var/run/dpdk/spdk_pid3226408 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3226968 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3236968 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3237485 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3242418 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3249283 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3252405 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3265091 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3275383 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3277378 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3278299 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3297666 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3302085 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3307520 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3309373 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3311498 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3311771 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3312047 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3312321 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3313207 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3315780 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3316957 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3317741 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3324002 00:35:15.922 Removing: 
/var/run/dpdk/spdk_pid3330316 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3335733 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3377157 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3381544 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3388415 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3389910 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3391548 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3396316 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3401205 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3410360 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3410369 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3415768 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3416031 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3416294 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3416636 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3416762 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3418282 00:35:15.922 Removing: /var/run/dpdk/spdk_pid3420050 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3421894 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3423742 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3425596 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3427444 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3434288 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3434844 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3436965 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3438137 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3444900 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3448548 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3454827 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3461391 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3467682 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3468443 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3469143 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3469788 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3470641 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3471443 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3472218 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3472895 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3477940 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3478209 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3484923 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3485228 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3487647 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3496916 00:35:15.923 Removing: /var/run/dpdk/spdk_pid3496921 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3502801 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3504944 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3507063 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3508376 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3510545 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3511925 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3521970 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3522500 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3523031 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3525641 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3526185 00:35:16.183 Removing: /var/run/dpdk/spdk_pid3526729 00:35:16.183 Clean 00:35:16.183 killing process with pid 3018301 00:35:24.294 killing process with pid 3018298 00:35:24.294 killing process with pid 3018300 00:35:24.551 killing process with pid 3018299 00:35:24.551 15:20:43 -- common/autotest_common.sh@1436 -- # return 0 00:35:24.551 15:20:43 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:35:24.551 15:20:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:24.551 15:20:43 -- common/autotest_common.sh@10 -- # set +x 00:35:24.551 15:20:43 -- spdk/autotest.sh@389 -- # timing_exit 
autotest
00:35:24.551 15:20:43 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:24.551 15:20:43 -- common/autotest_common.sh@10 -- # set +x
00:35:24.551 15:20:43 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:24.551 15:20:43 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:24.551 15:20:43 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:24.551 15:20:43 -- spdk/autotest.sh@394 -- # hash lcov
00:35:24.551 15:20:43 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:35:24.551 15:20:43 -- spdk/autotest.sh@396 -- # hostname
00:35:24.551 15:20:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:24.809 geninfo: WARNING: invalid characters removed from testname!
00:35:51.337 15:21:09 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:54.618 15:21:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:57.146 15:21:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:59.751 15:21:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:02.283 15:21:21 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:05.566 15:21:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
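The lcov invocations above follow a capture, combine, then filter pattern: a baseline capture and a test capture are merged into one tracefile, and paths that should not count toward coverage (bundled DPDK, system headers, example and tool sources) are stripped from it one pattern at a time. A minimal sketch of the same flow, assuming generic cov_base.info/cov_test.info inputs and an out/ directory rather than this job's exact paths:

# Sketch only: merge baseline + test coverage, then remove uninteresting paths.
LCOV_OPTS="-q --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
lcov $LCOV_OPTS -a out/cov_base.info -a out/cov_test.info -o out/cov_total.info
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r out/cov_total.info "$pattern" -o out/cov_total.info
done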
00:36:08.097 15:21:26 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:08.097 15:21:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:08.097 15:21:26 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:36:08.097 15:21:26 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:08.097 15:21:26 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:08.097 15:21:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:08.097 15:21:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:08.098 15:21:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:08.098 15:21:26 -- paths/export.sh@5 -- $ export PATH
00:36:08.098 15:21:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:08.098 15:21:26 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:36:08.098 15:21:26 -- common/autobuild_common.sh@435 -- $ date +%s
00:36:08.098 15:21:26 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718112086.XXXXXX
00:36:08.098 15:21:26 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718112086.VzJ98n
00:36:08.098 15:21:26 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:36:08.098 15:21:26 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:36:08.098 15:21:26 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:36:08.098 15:21:26 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:36:08.098 15:21:26 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
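The autobuild step above assembles the scan-build command string up front: it accumulates --exclude flags for directories that should not be analyzed, then combines them with an output directory and --status-bugs. A rough sketch of that assembly, where the repository root and the trailing make invocation are assumptions rather than values taken from this log:

# Sketch: accumulate static-analysis exclusions, then wrap the build in scan-build.
repo=/path/to/spdk                      # assumed checkout location
out="$repo/../output"
scanbuild_exclude="--exclude $repo/dpdk/"
scanbuild_exclude+=" --exclude $repo/xnvme --exclude /tmp"
scanbuild="scan-build -o $out/scan-build-tmp $scanbuild_exclude --status-bugs"
# A later build step (not part of this excerpt) could then run, for example:
# $scanbuild make -j"$(nproc)"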
00:36:08.098 15:21:26 -- common/autobuild_common.sh@451 -- $ get_config_params
00:36:08.098 15:21:26 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:36:08.098 15:21:26 -- common/autotest_common.sh@10 -- $ set +x
00:36:08.098 15:21:26 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:36:08.098 15:21:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:36:08.098 15:21:26 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:08.098 15:21:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:36:08.098 15:21:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:36:08.098 15:21:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:36:08.098 15:21:26 -- spdk/autopackage.sh@19 -- $ timing_finish
00:36:08.098 15:21:26 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:08.098 15:21:26 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:36:08.098 15:21:26 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:08.098 15:21:26 -- spdk/autopackage.sh@20 -- $ exit 0
00:36:08.098 + [[ -n 2975858 ]]
00:36:08.098 + sudo kill 2975858
00:36:08.107 [Pipeline] }
00:36:08.126 [Pipeline] // stage
00:36:08.131 [Pipeline] }
00:36:08.149 [Pipeline] // timeout
00:36:08.154 [Pipeline] }
00:36:08.172 [Pipeline] // catchError
00:36:08.178 [Pipeline] }
00:36:08.194 [Pipeline] // wrap
00:36:08.200 [Pipeline] }
00:36:08.213 [Pipeline] // catchError
00:36:08.223 [Pipeline] stage
00:36:08.225 [Pipeline] { (Epilogue)
00:36:08.239 [Pipeline] catchError
00:36:08.241 [Pipeline] {
00:36:08.254 [Pipeline] echo
00:36:08.255 Cleanup processes
00:36:08.260 [Pipeline] sh
00:36:08.542 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:08.543 3541785 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:08.558 [Pipeline] sh
00:36:08.841 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:08.841 ++ grep -v 'sudo pgrep'
00:36:08.841 ++ awk '{print $1}'
00:36:08.841 + sudo kill -9
00:36:08.841 + true
00:36:08.854 [Pipeline] sh
00:36:09.136 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:27.229 [Pipeline] sh
00:36:27.513 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:27.513 Artifacts sizes are good
00:36:27.527 [Pipeline] archiveArtifacts
00:36:27.535 Archiving artifacts
00:36:27.786 [Pipeline] sh
00:36:28.102 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:28.117 [Pipeline] cleanWs
00:36:28.127 [WS-CLEANUP] Deleting project workspace...
00:36:28.127 [WS-CLEANUP] Deferred wipeout is used...
00:36:28.134 [WS-CLEANUP] done
00:36:28.136 [Pipeline] }
00:36:28.158 [Pipeline] // catchError
00:36:28.171 [Pipeline] sh
00:36:28.448 + logger -p user.info -t JENKINS-CI
00:36:28.456 [Pipeline] }
00:36:28.473 [Pipeline] // stage
00:36:28.479 [Pipeline] }
00:36:28.496 [Pipeline] // node
00:36:28.502 [Pipeline] End of Pipeline
00:36:28.538 Finished: SUCCESS
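The timing_finish step above hands the per-step timing log to FlameGraph; flamegraph.pl writes the rendered SVG to stdout, so a standalone run would normally redirect it to a file. A small usage sketch, with the output path being an assumption rather than something this job does:

# Sketch: render autotest step timings as a flame graph (SVG goes to stdout).
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
    --countname seconds output/timing.txt > output/timing.svg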